If you wish to make an apple pie from scratch, you must first invent the universe.
In this book, we’ve been trying to understand computation by building models of it. So far, we’ve modelled computation by designing simple imaginary machines with various constraints, and seen that different constraints produce systems with different amounts of computational power.
The Turing machines from Chapter 5 are interesting because they’re able to implement complex behavior without relying on complex features. Equipped with just a tape, a read/write head, and a fixed set of rules, Turing machines have enough flexibility to simulate the behavior of machines with better storage capabilities, or nondeterministic execution, or any other fancy feature we might want. This tells us that full-blown computation doesn’t require a machine with a lot of underlying complexity, just the ability to store values, retrieve them, and use them to make simple decisions.
Models of computation don’t have to look like machines; they can look like programming languages instead. The Simple programming language from Chapter 2 can certainly perform computation, but it’s not elegant in the way that a Turing machine is. It already has plenty of syntax—numbers, Booleans, binary expressions, variables, assignments, sequences, conditionals, loops—and we haven’t even started to add the features that would make it suitable for writing real programs: strings, data structures, procedure calls, and so on.
To turn Simple into a genuinely useful programming language would be hard work, and the resulting design would contain a lot of incidental detail and not reveal very much about the basic nature of computation. It would be more interesting to start from scratch and create something minimal, a Turing machine of the programming language world, so that we can see which features are essential for computation and which are just incidental noise.
In this chapter, we’re going to investigate an extremely minimal programming language called the untyped lambda calculus. First, we’ll experiment with writing programs in a dialect of Ruby that approximates the lambda calculus by using as few language features as possible; this will still just be Ruby programming, but imposing imaginary constraints gives us an easy way to explore a restricted semantics without having to learn a whole new language. Then, once we’ve seen what this very limited feature set is capable of, we’ll take those features and implement them as a standalone language—with its own parser, abstract syntax, and operational semantics—using the techniques we’ve learned in earlier chapters.
To see how programming in a minimal language can work, let’s try solving a problem in Ruby without taking advantage of its many helpful features. Naturally, that means no gems, no standard library, and no modules, methods, classes, or objects, but since we’re trying to be as minimal as possible, we’ll also avoid the use of control structures, assignment, arrays, strings, numbers, and Booleans.
Of course, there won’t be a language left to program in if we avoid absolutely every feature of Ruby, so here are the ones we’re going to keep:
Referring to variables
Creating procs
Calling procs
That means we can only write Ruby code that looks like this:
-> x { -> y { x.call(y) } }
This is roughly how untyped lambda calculus programs look, and that’s a good enough approximation for our purposes. We’ll look at the lambda calculus itself in more detail in Implementing the Lambda Calculus.
To make our code shorter and easier to read, we’re also going to allow ourselves to use constants as abbreviations: if we create a complex expression, we can assign it to a constant to give it a short name that we can reuse later. Referring to the name is no different from retyping the original expression again—the name just makes the code less verbose—so we won’t be making ourselves dependent upon Ruby’s assignment features. At any time, we can decide to be more strict and undo the abbreviations by replacing each constant with the proc it refers to, at the expense of making our programs much longer.
Since we’re going to be building entire programs out of procs, let’s spend a minute looking at their properties before we dive into using them.
For the moment, we’re still using full-featured Ruby to illustrate the general behavior of procs. We won’t impose the restrictions until we start writing code to tackle The Problem.
Procs are plumbing for moving values around programs. Consider what happens when we call a proc:
-> x { x + 2 }.call(1)
The value that's provided as an argument to the call, in this case 1, flows into the parameter of the block, in this case x, and then flows out of the parameter to all the places where it's used, so Ruby ends up evaluating 1 + 2. It's the rest of the language that does the actual work; procs just connect parts of the program together and make values flow to where they're needed.
This already doesn’t bode well for our experiment in minimal Ruby. If procs can only move values between the pieces of Ruby that actually do something with them, how are we ever going to be able to build useful programs out of procs alone? We’ll get to that once we’ve explored some other properties of procs.
Procs can take multiple arguments, but this isn’t an essential feature. If we’ve got a proc that takes multiple arguments…
-> x, y { x + y }.call(3, 4)
…we can always rewrite it as nested single-argument procs:
-> x { -> y { x + y } }.call(3).call(4)
Here, the outer proc takes one argument, x, and returns the inner proc, which also takes one argument, y. We can call the outer proc with a value for x and then call the inner proc with a value for y, and we get the same result as in the multiargument case.[43]
Since we’re trying to remove as many features of Ruby as possible, let’s restrict ourselves to creating and calling single-argument procs; it won’t make things much worse.
The only way to find out about the code inside a proc is to call it, so two procs are interchangeable if they produce identical results when called with the same arguments, even if their internal code is different. This idea of treating two things as equal based on their externally visible behavior is called extensional equality.
For example, say we have a proc p:

>> p = -> n { n * 2 }
=> #<Proc (lambda)>
We can make another proc, q, which takes an argument and simply calls p with it:

>> q = -> x { p.call(x) }
=> #<Proc (lambda)>
p and q are obviously two different procs, but they're extensionally equal, because they do exactly the same thing for any argument:
>> p.call(5)
=> 10
>> q.call(5)
=> 10
Knowing that p is equivalent to -> x { p.call(x) } opens up new opportunities for refactoring. If we see the general pattern -> x { p.call(x) } in our program, we may choose to eliminate it by replacing the whole expression with just p, and under certain circumstances (which we'll see later), we might decide to go in the other direction too.
Ruby provides a choice of syntax for creating and calling procs. From this point onward, we'll use the -> arguments { body } syntax to create a proc and square brackets to call it:

>> -> x { x + 5 }[6]
=> 11
This makes it easy to see the body and argument of the proc without too much extra syntax getting in the way.
Our goal is to write the well-known FizzBuzz program:
Write a program that prints the numbers from 1 to 100. But for multiples of three, print “Fizz” instead of the number, and for the multiples of five, print “Buzz.” For numbers that are multiples of both three and five, print “FizzBuzz.”
—Imran Ghory, Using FizzBuzz to Find Developers who Grok Coding
This is an intentionally simple problem, designed to test whether an interview candidate has any programming experience at all. Anybody who knows how to program should be able to solve it without much difficulty.
Here’s an implementation of FizzBuzz in full-featured Ruby:
(1..100).each do |n|
  if (n % 15).zero?
    puts 'FizzBuzz'
  elsif (n % 3).zero?
    puts 'Fizz'
  elsif (n % 5).zero?
    puts 'Buzz'
  else
    puts n.to_s
  end
end
This isn’t the cleverest implementation of FizzBuzz—there are plenty of clever ones out there—but it’s a straightforward one that anyone could write without thinking about it.
However, this program contains some puts statements, and we have no way to print text to the console using only procs,[44] so we're going to replace it with a roughly equivalent program that returns an array of strings rather than printing them:
(1..100).map do |n|
  if (n % 15).zero?
    'FizzBuzz'
  elsif (n % 3).zero?
    'Fizz'
  elsif (n % 5).zero?
    'Buzz'
  else
    n.to_s
  end
end
This is still a meaningful solution to the FizzBuzz problem, but now it’s one that we have a chance of implementing using only procs.
Despite its simplicity, this is quite an ambitious program if we don't have any of the features of a programming language: it creates a range, maps over it, evaluates a big conditional, does some arithmetic with the modulo operator, uses the Fixnum#zero? predicate, uses some string literals, and turns numbers into strings with Fixnum#to_s. That's a fair amount of built-in Ruby functionality, and we're going to have to strip it all out and reimplement it with procs.
We’re going to start by focusing on the numbers that appear in FizzBuzz. How can
we possibly represent numbers without using Fixnum
s or any of the other datatypes that
Ruby provides?
If we’re going to try to implement numbers[45] from scratch, we’d better have a solid understanding of what we’re implementing. What is a number, anyway? It’s hard to come up with a concrete definition that doesn’t accidentally assume some aspect of what we’re trying to define; for example, “something that tells us how many…” is not very useful, because “how many” is really just another way of saying “number.”
Here’s one way of characterizing numbers: imagine we have a bag of apples and a bag of oranges. We take an apple out of one bag, an orange out of the other, and put them aside; then we keep taking out an apple and an orange together until at least one of the bags is empty.
If both bags become empty at the same time, we’ve learned something interesting: in spite of containing different things, those bags had some shared property that meant they became empty at the same time; at every point during the procedure of repeatedly removing an item from each bag, either both bags were nonempty or both bags were empty. This abstract property shared by the bags is what we can call a number (although we don’t know which number!), and we can compare these bags with any other bag in the world to see if it has the same “number” as them.
So one way to characterize numbers is by repetition (or iteration) of some action, in this case, taking an item from a bag. Each number corresponds to a unique way of repeating an action: the number one corresponds to just performing the action; the number two corresponds to performing it and then performing it again; and so on. The number zero, unsurprisingly, corresponds to not performing the action at all.
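This idea of a number as a count of repetitions can be sketched in full-featured Ruby. The #repeat helper below is hypothetical (it's not part of the restricted language we're building, since it uses Integer#times), but it shows the characterization directly: a number is just "how many times an action gets performed."

```ruby
# Hypothetical helper: characterize a number purely by how many times it
# repeats an action on a starting value.
def repeat(n, start, &action)
  n.times { start = action.call(start) }
  start
end

repeat(3, 0) { |x| x + 1 }   # three repetitions of "add one", starting from zero
```

Repeating "add one" three times starting from zero gets us back the Ruby number 3, and repeating it zero times leaves the starting value untouched, just as the characterization predicts.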
Since making and calling procs are the only "actions" our program can perform, we can try implementing a number n with code that repeats the action of calling a proc n times.
For example, if we were allowed to define methods—which we're not, but play along—then we could define #one as a method that takes a proc and some arbitrary second argument, and then calls the proc with that argument once:

def one(proc, x)
  proc[x]
end
We could also define #two, which calls the proc once and then calls it again with whatever the result of calling it the first time was:[46]

def two(proc, x)
  proc[proc[x]]
end
And so on:
def three(proc, x)
  proc[proc[proc[x]]]
end
Following this pattern, it's natural to define #zero as a method that takes a proc and some other argument, ignores the proc entirely (i.e., calls it zero times), and returns the second argument untouched:

def zero(proc, x)
  x
end
All of these implementations can be translated into methodless representations; for example, we can replace the method #one with a proc that takes two arguments[47] and then calls the first argument with the second one. They look like this:
ZERO  = -> p { -> x { x } }
ONE   = -> p { -> x { p[x] } }
TWO   = -> p { -> x { p[p[x]] } }
THREE = -> p { -> x { p[p[p[x]]] } }
This avoids functionality that we’re not allowed to use, and instead gives names to procs by assigning them to constants.
This technique of representing data as pure code is named Church encoding after Alonzo Church, the inventor of the lambda calculus. The encoded numbers are Church numerals, and we’ll shortly be seeing examples of Church Booleans and Church pairs.
Now, although we’re eschewing Ruby’s features inside our FizzBuzz solution, it would be useful to translate these foreign representations of numbers into native Ruby values once they’re outside our code—so that they can be usefully inspected on the console or asserted against in tests, or at least so that we can convince ourselves that they really do represent numbers in the first place.
Fortunately we can write a #to_integer method that performs this conversion:

def to_integer(proc)
  proc[-> n { n + 1 }][0]
end
This method takes a proc that represents a number and calls it with another proc (which just increments its argument) and the native Ruby number 0. If we call #to_integer with ZERO then, because of ZERO's definition, the incrementing proc doesn't get called at all and we get an untouched Ruby 0 back:
>> to_integer(ZERO)
=> 0
And if we call #to_integer with THREE, the incrementing proc gets called three times and we get a Ruby 3 back:
>> to_integer(THREE)
=> 3
So these proc-based representations really do encode numbers, and we can convert them into a more practical representation whenever we want to.
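Going in the other direction is possible too. Here's a sketch of a hypothetical #to_church helper—like #to_integer, it uses full-featured Ruby (Integer#times), so it lives outside our restricted language as a console convenience, but it shows that every nonnegative Ruby integer has a proc-based counterpart:

```ruby
def to_integer(proc)
  proc[-> n { n + 1 }][0]
end

# Hypothetical helper: build the proc representation of any nonnegative Ruby
# integer by calling p on x the requested number of times.
def to_church(number)
  -> p { -> x { number.times { x = p[x] }; x } }
end

to_integer(to_church(42))   # round-trips back to 42
```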
For FizzBuzz, we need the numbers five, fifteen, and one hundred, which can all be implemented with the same technique:
FIVE    = -> p { -> x { p[p[p[p[p[x]]]]] } }
FIFTEEN = -> p { -> x { p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[x]]]]]]]]]]]]]]] } }
HUNDRED =
  -> p { -> x {
    p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[
    p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[
    p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[
    p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[
      x
    ]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
    ]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
  } }
These aren’t very compact definitions, but they work, as we can
confirm with #to_integer
:
>> to_integer(FIVE)
=> 5
>> to_integer(FIFTEEN)
=> 15
>> to_integer(HUNDRED)
=> 100
So, going back to the FizzBuzz program, all of the Ruby numbers can be replaced with their proc-based implementations:
(ONE..HUNDRED).map do |n|
  if (n % FIFTEEN).zero?
    'FizzBuzz'
  elsif (n % THREE).zero?
    'Fizz'
  elsif (n % FIVE).zero?
    'Buzz'
  else
    n.to_s
  end
end
As promised, we're writing ONE instead of -> p { -> x { p[x] } }, and so on, to make the code clearer.
Unfortunately, this program doesn't work anymore, because we're now using operations like .. and % on the proc-based implementations of numbers. Because Ruby doesn't know how to treat these as numbers, it'll just blow up: TypeError: can't iterate from Proc, NoMethodError: undefined method `%' for #<Proc (lambda)>, and so on. We need to replace all of the operations to work with these representations—and we can only use procs to do it.
Before we can reimplement any of the operations, though, we need implementations of true and false.
How can we represent Booleans using only procs? Well, Booleans exist solely to be used in conditional statements, and in general, a conditional says “if some Boolean then this else that”:
>> success = true
=> true
>> if success then 'happy' else 'sad' end
=> "happy"
>> success = false
=> false
>> if success then 'happy' else 'sad' end
=> "sad"
So the real job of a Boolean is to allow us to choose between two options, and we can take advantage of this by representing a Boolean as a proc that chooses one of two values. Instead of thinking of a Boolean as a lifeless piece of data that can be read by some future code to decide which of two options to choose, we’ll just implement it directly as a piece of code that, when called with two options, either chooses the first option or chooses the second option.
Implemented as methods, then, #true and #false could be:

def true(x, y)
  x
end

def false(x, y)
  y
end
#true is a method that takes two arguments and returns the first one, and #false takes two arguments and returns the second. This is enough to give us crude conditional behavior:
>> success = :true
=> :true
>> send(success, 'happy', 'sad')
=> "happy"
>> success = :false
=> :false
>> send(success, 'happy', 'sad')
=> "sad"
As before, it’s straightforward to translate these methods into procs:
TRUE  = -> x { -> y { x } }
FALSE = -> x { -> y { y } }
And just as we defined #to_integer as a sanity check, to make sure it was possible to convert proc-based numbers into Ruby numbers, so we can define a #to_boolean method that can turn the TRUE and FALSE procs into Ruby's native true and false objects:

def to_boolean(proc)
  proc[true][false]
end
This works by taking a proc that represents a Boolean and calling it with true as its first argument and false as its second. TRUE just returns its first argument, so to_boolean(TRUE) will return true, and likewise for FALSE:
>> to_boolean(TRUE)
=> true
>> to_boolean(FALSE)
=> false
So representing Booleans with procs is surprisingly easy, but for FizzBuzz, we don't just need Booleans, we need a proc-only implementation of Ruby's if-elsif-else. In fact, because of the way these Boolean implementations work, it's easy to write an #if method too:

def if(proc, x, y)
  proc[x][y]
end
And that’s easy to translate into a proc:
IF = -> b { -> x { -> y { b[x][y] } } }
Clearly IF doesn't need to do any useful work, because the Boolean itself picks the right argument—IF is just sugar—but it looks more natural than calling the Boolean directly:
>> IF[TRUE]['happy']['sad']
=> "happy"
>> IF[FALSE]['happy']['sad']
=> "sad"
Incidentally, this means we can revise the definition of #to_boolean to use IF:

def to_boolean(proc)
  IF[proc][true][false]
end
While we’re refactoring, it’s worth noting that the implementation of IF
can be cleaned up significantly, because it contains some
procs that are equivalent to simpler ones, as discussed in Equality.
For example, look at IF
’s innermost proc:
-> y { b[x][y] }
This code means:

1. Take an argument y.
2. Call b with x to get a proc.
3. Call that proc with y.
Steps 1 and 3 are dead wood: when we call this proc with an argument, it just passes it on to another proc. So the whole thing is equivalent to just step 2, b[x], and we can remove the dead wood in the implementation of IF to make it simpler:
IF = -> b { -> x { b[x] } }
We can see the same pattern again in what’s now the innermost proc:
-> x { b[x] }
For the same reason, this proc is the same as just b, so we can simplify IF even further:
IF = -> b { b }
We’re not going to be able to simplify it any more than that.
IF doesn't do anything useful—it's TRUE and FALSE that do all the work—so we could simplify further by getting rid of it altogether. But our goal is to translate the original FizzBuzz solution into procs as faithfully as possible, so it's convenient to use IF to remind us where the if-elsif-else expression appeared in the original, even though it's purely decorative.
Anyway, now that we have IF, we can go back to the FizzBuzz program and replace the Ruby if-elsif-else with nested calls to IF:
(ONE..HUNDRED).map do |n|
  IF[(n % FIFTEEN).zero?][
    'FizzBuzz'
  ][IF[(n % THREE).zero?][
    'Fizz'
  ][IF[(n % FIVE).zero?][
    'Buzz'
  ][
    n.to_s
  ]]]
end
Our next job is to replace Fixnum#zero? with a proc-based implementation that will work with proc-based numbers. The underlying algorithm of #zero? for Ruby values is something like this:
def zero?(n)
  if n == 0
    true
  else
    false
  end
end
(This is more verbose than is necessary, but it's explicit about what happens: compare the number with 0; if it's equal, then return true; otherwise, return false.)
How can we adapt this to handle procs instead of Ruby numbers? Look at our implementation of numbers again:
ZERO  = -> p { -> x { x } }
ONE   = -> p { -> x { p[x] } }
TWO   = -> p { -> x { p[p[x]] } }
THREE = -> p { -> x { p[p[p[x]]] } }
⋮
Notice that ZERO is the only number that doesn't call p—it just returns x—whereas all of the other numbers call p at least once. We can take advantage of this: if we call an unknown number with TRUE as its second argument, it'll return TRUE immediately if the number is ZERO. If it's not ZERO, then it'll return whatever calling p returns, so if we make p a proc that always returns FALSE, we'll get the behavior we want:
def zero?(proc)
  proc[-> x { FALSE }][TRUE]
end
Again, it’s easy to rewrite this as a proc:
IS_ZERO = -> n { n[-> x { FALSE }][TRUE] }
We can use #to_boolean on the console to check that it works:
>> to_boolean(IS_ZERO[ZERO])
=> true
>> to_boolean(IS_ZERO[THREE])
=> false
That’s working fine, so in FizzBuzz, we can replace all of the calls to #zero?
with IS_ZERO
:
(ONE..HUNDRED).map do |n|
  IF[IS_ZERO[n % FIFTEEN]][
    'FizzBuzz'
  ][IF[IS_ZERO[n % THREE]][
    'Fizz'
  ][IF[IS_ZERO[n % FIVE]][
    'Buzz'
  ][
    n.to_s
  ]]]
end
We have usable data in the form of numbers and Booleans, but we don’t have any data structures for storing more than one value in an organized way. We’ll soon need some kind of data structure in order to implement more complex functionality, so let’s pause briefly to introduce one.
The simplest data structure is a pair, which is like a two-element array. Pairs are quite easy to implement:
PAIR  = -> x { -> y { -> f { f[x][y] } } }
LEFT  = -> p { p[-> x { -> y { x } }] }
RIGHT = -> p { p[-> x { -> y { y } }] }
The purpose of a pair is to store two values and provide them again later on request. To construct a pair, we call PAIR with two values, an x and a y, and it returns its inner proc:

-> f { f[x][y] }
This is a proc that, when called with another proc f, will call it back with the earlier values of x and y as arguments. LEFT and RIGHT are the operations that pick out the left and the right element of a pair by calling it with a proc that returns its first or second argument respectively. It all works simply enough:
>> my_pair = PAIR[THREE][FIVE]
=> #<Proc (lambda)>
>> to_integer(LEFT[my_pair])
=> 3
>> to_integer(RIGHT[my_pair])
=> 5
This very simple data structure is enough to get us started; we’ll use pairs later, in Lists, as a building block for more complex structures.
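Nesting already hints at how larger structures can be built before we get to Lists: a sketch (using native Ruby numbers as the stored values, for readability) of a three-element structure made from two pairs:

```ruby
PAIR  = -> x { -> y { -> f { f[x][y] } } }
LEFT  = -> p { p[-> x { -> y { x } }] }
RIGHT = -> p { p[-> x { -> y { y } }] }

# A "triple" is just a pair whose right element is another pair.
triple = PAIR[1][PAIR[2][3]]

LEFT[triple]           # => 1
LEFT[RIGHT[triple]]    # => 2
RIGHT[RIGHT[triple]]   # => 3
```

Pairs don't care what they store, so they can hold proc-based numbers, Booleans, or other pairs equally well.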
Now that we have numbers, Booleans, conditionals, predicates, and pairs, we’re almost ready to reimplement the modulo operator.
Before we can do anything as ambitious as taking the modulo of two numbers, we need to be able to perform simpler operations like incrementing and decrementing a single number. Incrementing is fairly straightforward:
INCREMENT = -> n { -> p { -> x { p[n[p][x]] } } }
Look at how INCREMENT works: we call it with a proc-based number n, and it'll return a new proc that takes some other proc p and some arbitrary second argument x, just like numbers do.
What does this new proc do when we call it? First it calls n with p and x—since n is a number, this means "call p, n times, on x," just as the original number would have done—and then calls p one more time on the result. Overall, then, this is a proc whose first argument gets called n + 1 times on its second argument, which is exactly how to represent the number n + 1.
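We can convince ourselves on the console with #to_integer. A quick check (the definitions are repeated here so the snippet stands on its own):

```ruby
TWO       = -> p { -> x { p[p[x]] } }
INCREMENT = -> n { -> p { -> x { p[n[p][x]] } } }

def to_integer(proc)
  proc[-> n { n + 1 }][0]
end

# Incrementing TWO yields a proc that calls p three times on x:
to_integer(INCREMENT[TWO])   # => 3
```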
But what about decrementing? This looks like a much harder problem: once a proc has already been called n times, it's easy enough to add an extra call so that it's been called n + 1 times, but there's no obvious way to "undo" one of them to make n - 1 calls.
One solution is to design a proc that, when called n times on some initial argument, returns the number n - 1. Fortunately, pairs give us a way of doing exactly that. Think about what this Ruby method does:
def slide(pair)
  [pair.last, pair.last + 1]
end
When we call slide with a two-element array of numbers, it returns a new two-element array containing the second number and the number that's one greater than it; if the input array contains consecutive numbers, the effect is that of "sliding" a narrow window up the number line:
>> slide([3, 4])
=> [4, 5]
>> slide([8, 9])
=> [9, 10]
This is useful to us, because by starting that window at -1, we can arrange a situation where the first number in the array is one less than the number of times we've called slide on it, even though we're only ever incrementing numbers:
>> slide([-1, 0])
=> [0, 1]
>> slide(slide([-1, 0]))
=> [1, 2]
>> slide(slide(slide([-1, 0])))
=> [2, 3]
>> slide(slide(slide(slide([-1, 0]))))
=> [3, 4]
We can’t do exactly this with proc-based numbers, because we don’t have a way of
representing -1
, but what’s interesting about slide
is that it only looks at the second number in the array
anyway, so we can put any dummy value—say, 0
—in place of
-1
and still get exactly the same result:
>> slide([0, 0])
=> [0, 1]
>> slide(slide([0, 0]))
=> [1, 2]
>> slide(slide(slide([0, 0])))
=> [2, 3]
>> slide(slide(slide(slide([0, 0]))))
=> [3, 4]
This is the key to making DECREMENT work: we can turn slide into a proc, use the proc representation of the number n to call slide n times on a pair of ZEROs, and then use LEFT to pull out the left number from the resulting pair.
SLIDE     = -> p { PAIR[RIGHT[p]][INCREMENT[RIGHT[p]]] }
DECREMENT = -> n { LEFT[n[SLIDE][PAIR[ZERO][ZERO]]] }
Here’s DECREMENT
in
action:
>> to_integer(DECREMENT[FIVE])
=> 4
>> to_integer(DECREMENT[FIFTEEN])
=> 14
>> to_integer(DECREMENT[HUNDRED])
=> 99
>> to_integer(DECREMENT[ZERO])
=> 0
The result of DECREMENT[ZERO] is actually just the dummy left element from the initial PAIR[ZERO][ZERO] value, which doesn't get SLIDE called on it at all in this case. Since we don't have negative numbers, 0 is the closest reasonable answer we can give for DECREMENT[ZERO], so using ZERO as the dummy value is a good idea.
Now that we have INCREMENT and DECREMENT, it's possible to implement more useful numeric operations like addition, subtraction, multiplication, and exponentiation:
ADD      = -> m { -> n { n[INCREMENT][m] } }
SUBTRACT = -> m { -> n { n[DECREMENT][m] } }
MULTIPLY = -> m { -> n { n[ADD[m]][ZERO] } }
POWER    = -> m { -> n { n[MULTIPLY[m]][ONE] } }
These implementations are largely self-explanatory. If we want to add m and n, that's just "starting with m, INCREMENT it n times," and likewise for subtraction; once we have ADD, we can multiply m and n by saying "starting with ZERO, ADD m to it n times," and similarly for exponentiation with MULTIPLY and ONE.
In Reducing expressions, we'll get Ruby to work through the small-step evaluation of ADD[ONE][ONE] to show how it produces TWO.
That should be enough arithmetic to get us started, but before we can implement % with procs, we need to know an algorithm for performing the modulo operation. Here's one that works on Ruby's numbers:
def mod(m, n)
  if n <= m
    mod(m - n, n)
  else
    m
  end
end
For example, to calculate 17 modulo 5:

1. 5 is less than or equal to 17, so subtract 5 from 17 and call #mod again with the result, i.e. try 12 modulo 5.
2. 5 is less than or equal to 12, so try 7 modulo 5.
3. 5 is less than or equal to 7, so try 2 modulo 5.
4. 5 is not less than or equal to 2, so return the result 2.
But we can’t implement #mod
with procs yet, because
it uses another operator, <=
, for which we don’t yet
have an implementation, so we need to digress briefly to implement <=
with procs.
We can begin with what looks like a pointlessly circular implementation of #less_or_equal? for Ruby numbers:

def less_or_equal?(m, n)
  m - n <= 0
end
This isn’t very useful, because it begs the question by relying on <=
, but it does at least recast the problem in terms of two
other problems we’ve already looked at: subtraction and comparison with zero. Subtraction
we’ve already dealt with, and we’ve done comparison for equality with
zero, but how do we implement less-than-or-equal-to zero?
As it happens we don’t need to worry about it, because zero is already the smallest
number we know how to implement—recall that our proc-based numbers are the nonnegative
integers—so “less than zero” is a meaningless concept in our number system. If we use
SUBTRACT
to subtract a larger number from a smaller
one, it’ll just return ZERO
, because there’s no way for
it to return a negative number, and ZERO
is the closest
it can get:[48]
>> to_integer(SUBTRACT[FIVE][THREE])
=> 2
>> to_integer(SUBTRACT[THREE][FIVE])
=> 0
We’ve already written IS_ZERO
, and since SUBTRACT[m][n]
will return ZERO
if m
is less than or equal to n
(i.e., if n
is at least as
large as m
), we have enough to implement #less_or_equal?
with procs:
def less_or_equal?(m, n)
  IS_ZERO[SUBTRACT[m][n]]
end
And let’s turn that method into a proc:
IS_LESS_OR_EQUAL = -> m { -> n { IS_ZERO[SUBTRACT[m][n]] } }
Does it work?
>> to_boolean(IS_LESS_OR_EQUAL[ONE][TWO])
=> true
>> to_boolean(IS_LESS_OR_EQUAL[TWO][TWO])
=> true
>> to_boolean(IS_LESS_OR_EQUAL[THREE][TWO])
=> false
Looks good.
This gives us the missing piece for our implementation of #mod, so we can rewrite it with procs:

def mod(m, n)
  IF[IS_LESS_OR_EQUAL[n][m]][
    mod(SUBTRACT[m][n], n)
  ][
    m
  ]
end
And replace the method definition with a proc:
MOD = -> m { -> n {
  IF[IS_LESS_OR_EQUAL[n][m]][
    MOD[SUBTRACT[m][n]][n]
  ][
    m
  ]
} }
Great! Does it work?
>> to_integer(MOD[THREE][TWO])
SystemStackError: stack level too deep
No.
Ruby dives off into an infinite recursive loop when we call MOD, because our translation of Ruby's native functionality into procs has missed something important about the semantics of conditionals. In languages like Ruby, the if-else statement is nonstrict (or lazy): we give it a condition and two blocks, and it evaluates the condition to decide which of the two blocks to evaluate and return—it never evaluates both.
The problem with our IF implementation is that we can't take advantage of the lazy behavior that's built into Ruby if-else; we just say "call a proc, IF, with two other procs," so Ruby charges ahead and evaluates both arguments before IF gets a chance to decide which one to return.
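This eagerness is easy to observe directly. In this sketch (native Ruby, with a hypothetical log array to record evaluation order), both branch expressions run before IF ever gets called:

```ruby
TRUE = -> x { -> y { x } }
IF   = -> b { -> x { -> y { b[x][y] } } }

log = []
branch = -> name { log.push(name); name }

# Ruby evaluates both arguments before IF can choose between them:
result = IF[TRUE][branch.call('then')][branch.call('else')]

result   # => "then"
log      # => ["then", "else"] -- the untaken branch was evaluated too
```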
Look again at MOD:

MOD = -> m { -> n {
  IF[IS_LESS_OR_EQUAL[n][m]][
    MOD[SUBTRACT[m][n]][n]
  ][
    m
  ]
} }
When we call MOD with values for m and n, and Ruby starts evaluating the body of the inner proc, it reaches the recursive call to MOD[SUBTRACT[m][n]][n] and immediately starts evaluating it as an argument to pass to IF, regardless of whether IS_LESS_OR_EQUAL[n][m] evaluated to TRUE or FALSE. This second call to MOD results in another unconditional recursive call, and so on, hence the infinite recursion.
To fix this, we need a way of telling Ruby to defer evaluation of IF’s second argument until we’re sure we need it. Evaluation of any expression in Ruby can be deferred by wrapping it in a proc, but wrapping an arbitrary Ruby value in a proc will generally change its meaning (e.g., the result of 1 + 2 does not equal -> { 1 + 2 }), so we might need to be more clever.
Fortunately we don’t, because this is a special case: we know that the result of calling MOD will be a single-argument proc, because all of our values are single-argument procs, and we already know (from Equality) that wrapping any proc p with another proc that takes the same arguments as p and immediately calls p with them will produce a value that is indistinguishable from just p, so we can use that trick here to defer the recursive call without affecting the meaning of the value being passed into IF:
MOD =
  -> m { -> n {
    IF[IS_LESS_OR_EQUAL[n][m]][
      -> x {
        MOD[SUBTRACT[m][n]][n][x]
      }
    ][
      m
    ]
  } }
This wraps the recursive MOD call in -> x { …[x] } to defer it; Ruby now won’t try to evaluate the body of that proc when it calls IF, but if the proc gets chosen by IF and returned as the result, it can be called by its recipient to finally trigger the (now definitely required) recursive call to MOD.
Does MOD work now?
>> to_integer(MOD[THREE][TWO])
=> 1
>> to_integer(MOD[POWER[THREE][THREE]][ADD[THREE][TWO]])
=> 2
Yes! Hooray!
But don’t celebrate yet, because there’s another, more insidious problem: we are defining the constant MOD in terms of the constant MOD, so this definition is not just an innocent abbreviation. This time we’re not merely assigning a complex proc to a constant in order to reuse it later; in fact, we’re relying on Ruby’s assignment semantics in order to assume that, even though MOD has obviously not yet been defined while we’re still defining it, we can nonetheless refer to it in MOD’s implementation and expect it to have become defined by the time we evaluate it later.
That’s cheating, because in principle, we should be able to undo all of the abbreviations—“where we said MOD, what we actually meant was this long proc”—but that’s impossible as long as MOD is defined in terms of itself.
We can solve this problem with the Y combinator, a famous piece of helper code designed for exactly this purpose: defining a recursive function without cheating. Here’s what it looks like:
Y =
  -> f {
    -> x { f[x[x]] }[-> x { f[x[x]] }]
  }
The Y combinator is hard to explain accurately without lots of detail, but here’s a (technically inaccurate) sketch: when we call the Y combinator with a proc, it will call that proc with the proc itself as the first argument. So, if we write a proc that expects an argument and then call the Y combinator with that proc, then the proc will get itself as that argument and therefore can use that argument whenever it wants to call itself.
Sadly, for the same reason that MOD was looping forever, the Y combinator will loop forever in Ruby too, so we need a modified version. It’s the expression x[x] that causes the problem, and we can again fix the problem by wrapping the occurrences of that expression in inert -> y { …[y] } procs to defer their evaluation:
Z =
  -> f {
    -> x { f[-> y { x[x][y] }] }[-> x { f[-> y { x[x][y] }] }]
  }
This is the Z combinator, which is just the Y combinator adapted for strict languages like Ruby.
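To see the Z combinator in action on something smaller than MOD, here is a quick sketch that uses native Ruby numbers (not Church numerals) to define a recursive factorial without the function ever referring to itself by name:

```ruby
# The Z combinator: the Y combinator with x[x] eta-expanded so it
# works under Ruby's strict evaluation.
Z = -> f { -> x { f[-> y { x[x][y] }] }[-> x { f[-> y { x[x][y] }] }] }

# FACT receives itself as the argument f, so the "recursive" call
# is just a call to f — no self-reference by name anywhere.
FACT = Z[-> f { -> n { n.zero? ? 1 : n * f[n - 1] } }]

FACT[5] # => 120
```

Inside the body, f is the whole factorial function, handed in by Z; that is exactly the trick we are about to apply to MOD.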
We can now finally make a satisfactory implementation of MOD by giving it an extra argument, f, wrapping a call to the Z combinator around it, and calling f where we used to call MOD:
MOD =
  Z[-> f { -> m { -> n {
    IF[IS_LESS_OR_EQUAL[n][m]][
      -> x {
        f[SUBTRACT[m][n]][n][x]
      }
    ][
      m
    ]
  } } }]
Thankfully this noncheating version of MOD still works:
>> to_integer(MOD[THREE][TWO])
=> 1
>> to_integer(MOD[POWER[THREE][THREE]][ADD[THREE][TWO]])
=> 2
Now we can replace all of the occurrences of % in the FizzBuzz program with calls to MOD:
(ONE..HUNDRED).map do |n|
  IF[IS_ZERO[MOD[n][FIFTEEN]]][
    'FizzBuzz'
  ][IF[IS_ZERO[MOD[n][THREE]]][
    'Fizz'
  ][IF[IS_ZERO[MOD[n][FIVE]]][
    'Buzz'
  ][
    n.to_s
  ]]]
end
We only have a few Ruby features left to reimplement for FizzBuzz: the range, the #map, the string literals, and the Fixnum#to_s. We’ve seen lots of detail for the other values and operations we’ve implemented, so we’ll go through the remaining ones quickly and in as little detail as possible. (Don’t worry about understanding everything; we’ll just be getting a flavor.)
To be able to implement ranges and #map, we need an implementation of lists, and the easiest way to build lists is to use pairs. The implementation works like a linked list, where each pair stores a value and a pointer to the next pair in the list; in this case, we use nested pairs instead of pointers. The standard list operations look like this:
EMPTY    = PAIR[TRUE][TRUE]
UNSHIFT  = -> l { -> x { PAIR[FALSE][PAIR[x][l]] } }
IS_EMPTY = LEFT
FIRST    = -> l { LEFT[RIGHT[l]] }
REST     = -> l { RIGHT[RIGHT[l]] }
And they work like this:
>> my_list = UNSHIFT[UNSHIFT[UNSHIFT[EMPTY][THREE]][TWO]][ONE]
=> #<Proc (lambda)>
>> to_integer(FIRST[my_list])
=> 1
>> to_integer(FIRST[REST[my_list]])
=> 2
>> to_integer(FIRST[REST[REST[my_list]]])
=> 3
>> to_boolean(IS_EMPTY[my_list])
=> false
>> to_boolean(IS_EMPTY[EMPTY])
=> true
Using FIRST and REST to pull out individual elements of lists is quite clumsy, so as with numbers and Booleans, we can write a #to_array method to help us on the console:
def to_array(proc)
  array = []

  until to_boolean(IS_EMPTY[proc])
    array.push(FIRST[proc])
    proc = REST[proc]
  end

  array
end
This makes it easier to inspect lists:
>> to_array(my_list)
=> [#<Proc (lambda)>, #<Proc (lambda)>, #<Proc (lambda)>]
>> to_array(my_list).map { |p| to_integer(p) }
=> [1, 2, 3]
How can we implement ranges? In fact, instead of finding a way to explicitly represent ranges as procs, let’s just write a proc that can build a list of all the elements in a range. For native Ruby numbers and “lists” (i.e., arrays), we can write it like this:
def range(m, n)
  if m <= n
    range(m + 1, n).unshift(m)
  else
    []
  end
end
This algorithm is slightly contrived in anticipation of the available list operations, but it makes sense: the list of all the numbers from m to n is the same as the list of all the numbers from m + 1 to n with m unshifted onto the front; if m is greater than n, then the list of numbers is empty.
Happily, we already have everything we need to translate this method directly into procs:
RANGE =
  Z[-> f { -> m { -> n {
    IF[IS_LESS_OR_EQUAL[m][n]][
      -> x {
        UNSHIFT[f[INCREMENT[m]][n]][m][x]
      }
    ][
      EMPTY
    ]
  } } }]
Note the use of the Z combinator for recursion, and a deferring -> x { …[x] } proc around the TRUE branch of the conditional.
Does this work?
>> my_range = RANGE[ONE][FIVE]
=> #<Proc (lambda)>
>> to_array(my_range).map { |p| to_integer(p) }
=> [1, 2, 3, 4, 5]
Yes, so let’s use it in FizzBuzz:
RANGE[ONE][HUNDRED].map do |n|
  IF[IS_ZERO[MOD[n][FIFTEEN]]][
    'FizzBuzz'
  ][IF[IS_ZERO[MOD[n][THREE]]][
    'Fizz'
  ][IF[IS_ZERO[MOD[n][FIVE]]][
    'Buzz'
  ][
    n.to_s
  ]]]
end
To implement #map, we can use a helper called FOLD, which is a bit like Ruby’s Enumerable#inject:
FOLD =
  Z[-> f { -> l { -> x { -> g {
    IF[IS_EMPTY[l]][
      x
    ][
      -> y {
        g[f[REST[l]][x][g]][FIRST[l]][y]
      }
    ]
  } } } }]
FOLD makes it easier to write procs that process every item in a list:
>> to_integer(FOLD[RANGE[ONE][FIVE]][ZERO][ADD])
=> 15
>> to_integer(FOLD[RANGE[ONE][FIVE]][ONE][MULTIPLY])
=> 120
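For reference, the two folds above mirror what Enumerable#inject does with native Ruby numbers; the proc version is just the same computation with all of the machinery spelled out:

```ruby
# The same two folds with native Ruby numbers and Enumerable#inject:
# fold (1..5) with a starting value and a two-argument operation.
sum     = (1..5).inject(0) { |x, y| x + y }
product = (1..5).inject(1) { |x, y| x * y }

sum     # => 15
product # => 120
```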
Once we have FOLD, we can write MAP concisely:
MAP =
  -> k { -> f {
    FOLD[k][EMPTY][
      -> l { -> x { UNSHIFT[l][f[x]] } }
    ]
  } }
Does MAP work?
>> my_list = MAP[RANGE[ONE][FIVE]][INCREMENT]
=> #<Proc (lambda)>
>> to_array(my_list).map { |p| to_integer(p) }
=> [2, 3, 4, 5, 6]
Yes. So we can replace #map in FizzBuzz:
MAP[RANGE[ONE][HUNDRED]][-> n {
  IF[IS_ZERO[MOD[n][FIFTEEN]]][
    'FizzBuzz'
  ][IF[IS_ZERO[MOD[n][THREE]]][
    'Fizz'
  ][IF[IS_ZERO[MOD[n][FIVE]]][
    'Buzz'
  ][
    n.to_s
  ]]]
}]
Nearly finished! All that remains is to deal with the strings.
Strings are easy to handle: we can just represent them as lists of numbers, as long as we agree on an encoding that determines which number represents which character.
We can choose any encoding we want, so instead of using a general-purpose one like ASCII, let’s design a new one that’s more convenient for FizzBuzz. We only need to encode digits and the strings 'FizzBuzz', 'Fizz', and 'Buzz', so we can use the numbers 0 to 9 to represent the characters '0' to '9', and the numbers from 10 to 14 to encode the characters 'B', 'F', 'i', 'u', and 'z'.
This already gives us a way to represent the string literals we need (being careful not to clobber the Z combinator):
TEN      = MULTIPLY[TWO][FIVE]

B        = TEN
F        = INCREMENT[B]
I        = INCREMENT[F]
U        = INCREMENT[I]
ZED      = INCREMENT[U]

FIZZ     = UNSHIFT[UNSHIFT[UNSHIFT[UNSHIFT[EMPTY][ZED]][ZED]][I]][F]
BUZZ     = UNSHIFT[UNSHIFT[UNSHIFT[UNSHIFT[EMPTY][ZED]][ZED]][U]][B]
FIZZBUZZ = UNSHIFT[UNSHIFT[UNSHIFT[UNSHIFT[BUZZ][ZED]][ZED]][I]][F]
To check that these work, we can write some external methods to convert them into Ruby strings:
def to_char(c)
  '0123456789BFiuz'.slice(to_integer(c))
end

def to_string(s)
  to_array(s).map { |c| to_char(c) }.join
end
Alright, do the strings work?
>> to_char(ZED)
=> "z"
>> to_string(FIZZBUZZ)
=> "FizzBuzz"
Great. So we can use them in FizzBuzz:
MAP[RANGE[ONE][HUNDRED]][-> n {
  IF[IS_ZERO[MOD[n][FIFTEEN]]][
    FIZZBUZZ
  ][IF[IS_ZERO[MOD[n][THREE]]][
    FIZZ
  ][IF[IS_ZERO[MOD[n][FIVE]]][
    BUZZ
  ][
    n.to_s
  ]]]
}]
The very last thing to implement is Fixnum#to_s. For that, we need to be able to split a number into its component digits, and here’s one way to do that in Ruby:
def to_digits(n)
  previous_digits =
    if n < 10
      []
    else
      to_digits(n / 10)
    end

  previous_digits.push(n % 10)
end
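It may be worth sanity-checking the native version before translating it. The method is restated here (unchanged) so the snippet runs on its own:

```ruby
# The digit-splitting method from above, restated so this runs standalone.
def to_digits(n)
  previous_digits =
    if n < 10
      []
    else
      to_digits(n / 10)
    end

  previous_digits.push(n % 10)
end

to_digits(5)   # => [5]
to_digits(125) # => [1, 2, 5]
```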
We haven’t implemented <, but we can dodge that problem by using n <= 9 instead of n < 10. Unfortunately, we can’t dodge implementing Fixnum#/ and Array#push, so here they are:
DIV =
  Z[-> f { -> m { -> n {
    IF[IS_LESS_OR_EQUAL[n][m]][
      -> x {
        INCREMENT[f[SUBTRACT[m][n]][n]][x]
      }
    ][
      ZERO
    ]
  } } }]

PUSH =
  -> l { -> x {
    FOLD[l][UNSHIFT[EMPTY][x]][UNSHIFT]
  } }
Now we can translate #to_digits into a proc:
TO_DIGITS =
  Z[-> f { -> n {
    PUSH[
      IF[IS_LESS_OR_EQUAL[n][DECREMENT[TEN]]][
        EMPTY
      ][
        -> x {
          f[DIV[n][TEN]][x]
        }
      ]
    ][MOD[n][TEN]]
  } }]
Does it work?
>> to_array(TO_DIGITS[FIVE]).map { |p| to_integer(p) }
=> [5]
>> to_array(TO_DIGITS[POWER[FIVE][THREE]]).map { |p| to_integer(p) }
=> [1, 2, 5]
Yes. And because we had the foresight to design a string encoding where 1 represents '1' and so on, the arrays produced by TO_DIGITS are already valid strings:
>> to_string(TO_DIGITS[FIVE])
=> "5"
>> to_string(TO_DIGITS[POWER[FIVE][THREE]])
=> "125"
So we can replace #to_s with TO_DIGITS in FizzBuzz:
MAP[RANGE[ONE][HUNDRED]][-> n {
  IF[IS_ZERO[MOD[n][FIFTEEN]]][
    FIZZBUZZ
  ][IF[IS_ZERO[MOD[n][THREE]]][
    FIZZ
  ][IF[IS_ZERO[MOD[n][FIVE]]][
    BUZZ
  ][
    TO_DIGITS[n]
  ]]]
}]
We’ve finally finished! (This would’ve been the longest, most awkward job interview ever.) We now have an implementation of FizzBuzz written entirely with procs. Let’s run it to make sure it works properly:
>> solution =
     MAP[RANGE[ONE][HUNDRED]][-> n {
       IF[IS_ZERO[MOD[n][FIFTEEN]]][
         FIZZBUZZ
       ][IF[IS_ZERO[MOD[n][THREE]]][
         FIZZ
       ][IF[IS_ZERO[MOD[n][FIVE]]][
         BUZZ
       ][
         TO_DIGITS[n]
       ]]]
     }]
=> #<Proc (lambda)>
>> to_array(solution).each do |p|
     puts to_string(p)
   end; nil
1
2
Fizz
4
Buzz
Fizz
7
⋮
94
Buzz
Fizz
97
98
Fizz
Buzz
=> nil
Having gone to so much trouble to make sure that every constant is just an abbreviation of some longer expression, we owe it to ourselves to replace each constant with its definition so we can see the complete program:
-> k { -> f { -> f { -> x {
f[-> y { x[x][y] }] }[-> x { f[-> y { x[x][y] }] }] }[-> f {
-> l { -> x { -> g { -> b { b }[-> p { p[-> x { ->
y { x } }] }[l]][x][-> y { g[f[-> l { -> p { p[-> x { ->
y { y } }] }[-> p { p[-> x { -> y { y } }] }[l]]
}[l]][x][g]][-> l { -> p { p[-> x { -> y { x } }] }[-> p
{ p[-> x { -> y { y } }] }[l]] }[l]][y] }] } } } }][k][-> x {
-> y { -> f { f[x][y] } } }[-> x { -> y { x } }][-> x {
-> y { x } }]][-> l { -> x { -> l { -> x { -> x {
-> y { -> f { f[x][y] } } }[-> x { -> y { y } }][-> x {
-> y { -> f { f[x][y] } } }[x][l]] } }[l][f[x]] } }] } }[-> f {
-> x { f[-> y { x[x][y] }] }[-> x { f[-> y { x[x][y] }] }]
}[-> f { -> m { -> n { -> b { b }[-> m { -> n { ->
n { n[-> x { -> x { -> y { y } } }][-> x { -> y { x } }]
}[-> m { -> n { n[-> n { -> p { p[-> x { -> y { x } }]
}[n[-> p { -> x { -> y { -> f { f[x][y] } } }[-> p {
p[-> x { -> y { y } }] }[p]][-> n { -> p { -> x {
p[n[p][x]] } } }[-> p { p[-> x { -> y { y } }] }[p]]] }][->
x { -> y { -> f { f[x][y] } } }[-> p { -> x { x } }][-> p
{ -> x { x } }]]] }][m] } }[m][n]] } }[m][n]][-> x { -> l {
-> x { -> x { -> y { -> f { f[x][y] } } }[-> x { -> y
{ y } }][-> x { -> y { -> f { f[x][y] } } }[x][l]] } }[f[->
n { -> p { -> x { p[n[p][x]] } } }[m]][n]][m][x] }][-> x {
-> y { -> f { f[x][y] } } }[-> x { -> y { x } }][-> x {
-> y { x } }]] } } }][-> p { -> x { p[x] } }][-> p { -> x
{
p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[x]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
} }]][-> n { -> b { b }[-> n { n[-> x { -> x { -> y {
y } } }][-> x { -> y { x } }] }[-> f { -> x { f[-> y {
x[x][y] }] }[-> x { f[-> y { x[x][y] }] }] }[-> f { -> m {
-> n { -> b { b }[-> m { -> n { -> n { n[-> x { ->
x { -> y { y } } }][-> x { -> y { x } }] }[-> m { -> n {
n[-> n { -> p { p[-> x { -> y { x } }] }[n[-> p { -> x
{ -> y { -> f { f[x][y] } } }[-> p { p[-> x { -> y { y }
}] }[p]][-> n { -> p { -> x { p[n[p][x]] } } }[-> p {
p[-> x { -> y { y } }] }[p]]] }][-> x { -> y { -> f {
f[x][y] } } }[-> p { -> x { x } }][-> p { -> x { x } }]]]
}][m] } }[m][n]] } }[n][m]][-> x { f[-> m { -> n { n[-> n {
-> p { p[-> x { -> y { x } }] }[n[-> p { -> x { -> y {
-> f { f[x][y] } } }[-> p { p[-> x { -> y { y } }]
}[p]][-> n { -> p { -> x { p[n[p][x]] } } }[-> p { p[-> x
{ -> y { y } }] }[p]]] }][-> x { -> y { -> f { f[x][y] } }
}[-> p { -> x { x } }][-> p { -> x { x } }]]] }][m] }
}[m][n]][n][x] }][m] } } }][n][-> p { -> x {
p[p[p[p[p[p[p[p[p[p[p[p[p[p[p[x]]]]]]]]]]]]]]] } }]]][-> l { -> x
{ -> x { -> y { -> f { f[x][y] } } }[-> x { -> y { y }
}][-> x { -> y { -> f { f[x][y] } } }[x][l]] } }[-> l {
-> x { -> x { -> y { -> f { f[x][y] } } }[-> x { -> y
{ y } }][-> x { -> y { -> f { f[x][y] } } }[x][l]] } }[-> l
{ -> x { -> x { -> y { -> f { f[x][y] } } }[-> x { ->
y { y } }][-> x { -> y { -> f { f[x][y] } } }[x][l]] } }[->
l { -> x { -> x { -> y { -> f { f[x][y] } } }[-> x {
-> y { y } }][-> x { -> y { -> f { f[x][y] } } }[x][l]] }
}[-> l { -> x { -> x { -> y { -> f { f[x][y] } } }[->
x { -> y { y } }][-> x { -> y { -> f { f[x][y] } } }[x][l]]
} }[-> l { -> x { -> x { -> y { -> f { f[x][y] } }
}[-> x { -> y { y } }][-> x { -> y { -> f { f[x][y] } }
}[x][l]] } }[-> l { -> x { -> x { -> y { -> f { f[x][y] }
} }[-> x { -> y { y } }][-> x { -> y { -> f { f[x][y] } }
}[x][l]] } }[-> l { -> x { -> x { -> y { -> f { f[x][y] }
} }[-> x { -> y { y } }][-> x { -> y { -> f { f[x][y] } }
}[x][l]] } }[-> x { -> y { -> f { f[x][y] } } }[-> x { ->
y { x } }][-> x { -> y { x } }]][-> n { -> p { -> x {
p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]] } } }[-> n
{ -> p { -> x { p[n[p][x]] } } }[-> n { -> p { -> x {
p[n[p][x]] } } }[-> m { -> n { n[-> m { -> n { n[-> n {
-> p { -> x { p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x }
}] } }[-> p { -> x { p[p[x]] } }][-> p { -> x {
p[p[p[p[p[x]]]]] } }]]]]]]][-> n { -> p { -> x { p[n[p][x]] } }
}[-> n { -> p { -> x { p[n[p][x]] } } }[-> n { -> p {
-> x { p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]] } }
}[-> m { -> n { n[-> m { -> n { n[-> n { -> p { ->
x { p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x } }] } }[-> p
{ -> x { p[p[x]] } }][-> p { -> x { p[p[p[p[p[x]]]]] }
}]]]]]]][-> n { -> p { -> x { p[n[p][x]] } } }[-> n { ->
p { -> x { p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]]
} } }[-> m { -> n { n[-> m { -> n { n[-> n { -> p {
-> x { p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x } }] }
}[-> p { -> x { p[p[x]] } }][-> p { -> x { p[p[p[p[p[x]]]]]
} }]]]]]][-> m { -> n { n[-> m { -> n { n[-> n { -> p
{ -> x { p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x } }] }
}[-> p { -> x { p[p[x]] } }][-> p { -> x { p[p[p[p[p[x]]]]]
} }]]][-> n { -> p { -> x { p[n[p][x]] } } }[-> n { -> p
{ -> x { p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]] }
} }[-> n { -> p { -> x { p[n[p][x]] } } }[-> m { -> n {
n[-> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } }
}][m] } }[m]][-> p { -> x { x } }] } }[-> p { -> x { p[p[x]]
} }][-> p { -> x { p[p[p[p[p[x]]]]] } }]]]]]]][-> n { -> p {
-> x { p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]] } }
}[-> n { -> p { -> x { p[n[p][x]] } } }[-> n { -> p {
-> x { p[n[p][x]] } } }[-> m { -> n { n[-> m { -> n {
n[-> n { -> p { -> x { p[n[p][x]] } } }][m] } }[m]][-> p {
-> x { x } }] } }[-> p { -> x { p[p[x]] } }][-> p { -> x
{ p[p[p[p[p[x]]]]] } }]]]]]]][-> n { -> p { -> x { p[n[p][x]] }
} }[-> n { -> p { -> x { p[n[p][x]] } } }[-> m { -> n {
n[-> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } }
}][m] } }[m]][-> p { -> x { x } }] } }[-> p { -> x { p[p[x]]
} }][-> p { -> x { p[p[p[p[p[x]]]]] } }]]]]][-> n { -> p {
-> x { p[n[p][x]] } } }[-> m { -> n { n[-> m { -> n {
n[-> n { -> p { -> x { p[n[p][x]] } } }][m] } }[m]][-> p {
-> x { x } }] } }[-> p { -> x { p[p[x]] } }][-> p { -> x
{ p[p[p[p[p[x]]]]] } }]]]][-> b { b }[-> n { n[-> x { -> x {
-> y { y } } }][-> x { -> y { x } }] }[-> f { -> x {
f[-> y { x[x][y] }] }[-> x { f[-> y { x[x][y] }] }] }[-> f {
-> m { -> n { -> b { b }[-> m { -> n { -> n { n[->
x { -> x { -> y { y } } }][-> x { -> y { x } }] }[-> m {
-> n { n[-> n { -> p { p[-> x { -> y { x } }] }[n[-> p
{ -> x { -> y { -> f { f[x][y] } } }[-> p { p[-> x {
-> y { y } }] }[p]][-> n { -> p { -> x { p[n[p][x]] } }
}[-> p { p[-> x { -> y { y } }] }[p]]] }][-> x { -> y {
-> f { f[x][y] } } }[-> p { -> x { x } }][-> p { -> x { x
} }]]] }][m] } }[m][n]] } }[n][m]][-> x { f[-> m { -> n {
n[-> n { -> p { p[-> x { -> y { x } }] }[n[-> p { -> x
{ -> y { -> f { f[x][y] } } }[-> p { p[-> x { -> y { y }
}] }[p]][-> n { -> p { -> x { p[n[p][x]] } } }[-> p {
p[-> x { -> y { y } }] }[p]]] }][-> x { -> y { -> f {
f[x][y] } } }[-> p { -> x { x } }][-> p { -> x { x } }]]]
}][m] } }[m][n]][n][x] }][m] } } }][n][-> p { -> x { p[p[p[x]]] }
}]]][-> l { -> x { -> x { -> y { -> f { f[x][y] } }
}[-> x { -> y { y } }][-> x { -> y { -> f { f[x][y] } }
}[x][l]] } }[-> l { -> x { -> x { -> y { -> f { f[x][y] }
} }[-> x { -> y { y } }][-> x { -> y { -> f { f[x][y] } }
}[x][l]] } }[-> l { -> x { -> x { -> y { -> f { f[x][y] }
} }[-> x { -> y { y } }][-> x { -> y { -> f { f[x][y] } }
}[x][l]] } }[-> l { -> x { -> x { -> y { -> f { f[x][y] }
} }[-> x { -> y { y } }][-> x { -> y { -> f { f[x][y] } }
}[x][l]] } }[-> x { -> y { -> f { f[x][y] } } }[-> x { ->
y { x } }][-> x { -> y { x } }]][-> n { -> p { -> x {
p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]] } } }[-> n
{ -> p { -> x { p[n[p][x]] } } }[-> n { -> p { -> x {
p[n[p][x]] } } }[-> m { -> n { n[-> m { -> n { n[-> n {
-> p { -> x { p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x }
}] } }[-> p { -> x { p[p[x]] } }][-> p { -> x {
p[p[p[p[p[x]]]]] } }]]]]]]][-> n { -> p { -> x { p[n[p][x]] } }
}[-> n { -> p { -> x { p[n[p][x]] } } }[-> n { -> p {
-> x { p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]] } }
}[-> m { -> n { n[-> m { -> n { n[-> n { -> p { ->
x { p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x } }] } }[-> p
{ -> x { p[p[x]] } }][-> p { -> x { p[p[p[p[p[x]]]]] }
}]]]]]]][-> n { -> p { -> x { p[n[p][x]] } } }[-> n { ->
p { -> x { p[n[p][x]] } } }[-> m { -> n { n[-> m { -> n {
n[-> n { -> p { -> x { p[n[p][x]] } } }][m] } }[m]][-> p {
-> x { x } }] } }[-> p { -> x { p[p[x]] } }][-> p { -> x
{ p[p[p[p[p[x]]]]] } }]]]]][-> n { -> p { -> x { p[n[p][x]] } }
}[-> m { -> n { n[-> m { -> n { n[-> n { -> p { ->
x { p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x } }] } }[-> p
{ -> x { p[p[x]] } }][-> p { -> x { p[p[p[p[p[x]]]]] }
}]]]][-> b { b }[-> n { n[-> x { -> x { -> y { y } }
}][-> x { -> y { x } }] }[-> f { -> x { f[-> y { x[x][y]
}] }[-> x { f[-> y { x[x][y] }] }] }[-> f { -> m { -> n {
-> b { b }[-> m { -> n { -> n { n[-> x { -> x { ->
y { y } } }][-> x { -> y { x } }] }[-> m { -> n { n[-> n
{ -> p { p[-> x { -> y { x } }] }[n[-> p { -> x { -> y
{ -> f { f[x][y] } } }[-> p { p[-> x { -> y { y } }]
}[p]][-> n { -> p { -> x { p[n[p][x]] } } }[-> p { p[-> x
{ -> y { y } }] }[p]]] }][-> x { -> y { -> f { f[x][y] } }
}[-> p { -> x { x } }][-> p { -> x { x } }]]] }][m] }
}[m][n]] } }[n][m]][-> x { f[-> m { -> n { n[-> n { -> p
{ p[-> x { -> y { x } }] }[n[-> p { -> x { -> y { -> f
{ f[x][y] } } }[-> p { p[-> x { -> y { y } }] }[p]][-> n {
-> p { -> x { p[n[p][x]] } } }[-> p { p[-> x { -> y { y }
}] }[p]]] }][-> x { -> y { -> f { f[x][y] } } }[-> p { ->
x { x } }][-> p { -> x { x } }]]] }][m] } }[m][n]][n][x] }][m] } }
}][n][-> p { -> x { p[p[p[p[p[x]]]]] } }]]][-> l { -> x {
-> x { -> y { -> f { f[x][y] } } }[-> x { -> y { y }
}][-> x { -> y { -> f { f[x][y] } } }[x][l]] } }[-> l {
-> x { -> x { -> y { -> f { f[x][y] } } }[-> x { -> y
{ y } }][-> x { -> y { -> f { f[x][y] } } }[x][l]] } }[-> l
{ -> x { -> x { -> y { -> f { f[x][y] } } }[-> x { ->
y { y } }][-> x { -> y { -> f { f[x][y] } } }[x][l]] } }[->
l { -> x { -> x { -> y { -> f { f[x][y] } } }[-> x {
-> y { y } }][-> x { -> y { -> f { f[x][y] } } }[x][l]] }
}[-> x { -> y { -> f { f[x][y] } } }[-> x { -> y { x }
}][-> x { -> y { x } }]][-> n { -> p { -> x { p[n[p][x]]
} } }[-> n { -> p { -> x { p[n[p][x]] } } }[-> n { -> p {
-> x { p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]] } }
}[-> m { -> n { n[-> m { -> n { n[-> n { -> p { ->
x { p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x } }] } }[-> p
{ -> x { p[p[x]] } }][-> p { -> x { p[p[p[p[p[x]]]]] }
}]]]]]]][-> n { -> p { -> x { p[n[p][x]] } } }[-> n { ->
p { -> x { p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]]
} } }[-> n { -> p { -> x { p[n[p][x]] } } }[-> m { -> n {
n[-> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } }
}][m] } }[m]][-> p { -> x { x } }] } }[-> p { -> x { p[p[x]]
} }][-> p { -> x { p[p[p[p[p[x]]]]] } }]]]]]]][-> n { -> p {
-> x { p[n[p][x]] } } }[-> n { -> p { -> x { p[n[p][x]] } }
}[-> n { -> p { -> x { p[n[p][x]] } } }[-> m { -> n {
n[-> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } }
}][m] } }[m]][-> p { -> x { x } }] } }[-> p { -> x { p[p[x]]
} }][-> p { -> x { p[p[p[p[p[x]]]]] } }]]]]]][-> m { -> n {
n[-> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } }
}][m] } }[m]][-> p { -> x { x } }] } }[-> p { -> x { p[p[x]]
} }][-> p { -> x { p[p[p[p[p[x]]]]] } }]]][-> f { -> x {
f[-> y { x[x][y] }] }[-> x { f[-> y { x[x][y] }] }] }[-> f {
-> n { -> l { -> x { -> f { -> x { f[-> y { x[x][y] }]
}[-> x { f[-> y { x[x][y] }] }] }[-> f { -> l { -> x {
-> g { -> b { b }[-> p { p[-> x { -> y { x } }]
}[l]][x][-> y { g[f[-> l { -> p { p[-> x { -> y { y } }]
}[-> p { p[-> x { -> y { y } }] }[l]] }[l]][x][g]][-> l {
-> p { p[-> x { -> y { y } }] }[-> p { p[-> x { -> y {
y } }] }[l]] }[l]][y] }] } } } }][l][-> l { -> x { -> x { ->
y { -> f { f[x][y] } } }[-> x { -> y { y } }][-> x { -> y
{ -> f { f[x][y] } } }[x][l]] } }[-> x { -> y { -> f {
f[x][y] } } }[-> x { -> y { x } }][-> x { -> y { x }
}]][x]][-> l { -> x { -> x { -> y { -> f { f[x][y] } }
}[-> x { -> y { y } }][-> x { -> y { -> f { f[x][y] } }
}[x][l]] } }] } }[-> b { b }[-> m { -> n { -> n { n[-> x
{ -> x { -> y { y } } }][-> x { -> y { x } }] }[-> m {
-> n { n[-> n { -> p { p[-> x { -> y { x } }] }[n[-> p
{ -> x { -> y { -> f { f[x][y] } } }[-> p { p[-> x {
-> y { y } }] }[p]][-> n { -> p { -> x { p[n[p][x]] } }
}[-> p { p[-> x { -> y { y } }] }[p]]] }][-> x { -> y {
-> f { f[x][y] } } }[-> p { -> x { x } }][-> p { -> x { x
} }]]] }][m] } }[m][n]] } }[n][-> n { -> p { p[-> x { -> y {
x } }] }[n[-> p { -> x { -> y { -> f { f[x][y] } } }[-> p
{ p[-> x { -> y { y } }] }[p]][-> n { -> p { -> x {
p[n[p][x]] } } }[-> p { p[-> x { -> y { y } }] }[p]]] }][->
x { -> y { -> f { f[x][y] } } }[-> p { -> x { x } }][-> p
{ -> x { x } }]]] }[-> m { -> n { n[-> m { -> n { n[->
n { -> p { -> x { p[n[p][x]] } } }][m] } }[m]][-> p { -> x {
x } }] } }[-> p { -> x { p[p[x]] } }][-> p { -> x {
p[p[p[p[p[x]]]]] } }]]]][-> x { -> y { -> f { f[x][y] } }
}[-> x { -> y { x } }][-> x { -> y { x } }]][-> x {
f[-> f { -> x { f[-> y { x[x][y] }] }[-> x { f[-> y {
x[x][y] }] }] }[-> f { -> m { -> n { -> b { b }[-> m {
-> n { -> n { n[-> x { -> x { -> y { y } } }][-> x {
-> y { x } }] }[-> m { -> n { n[-> n { -> p { p[-> x {
-> y { x } }] }[n[-> p { -> x { -> y { -> f { f[x][y] } }
}[-> p { p[-> x { -> y { y } }] }[p]][-> n { -> p { ->
x { p[n[p][x]] } } }[-> p { p[-> x { -> y { y } }] }[p]]]
}][-> x { -> y { -> f { f[x][y] } } }[-> p { -> x { x }
}][-> p { -> x { x } }]]] }][m] } }[m][n]] } }[n][m]][-> x {
-> n { -> p { -> x { p[n[p][x]] } } }[f[-> m { -> n {
n[-> n { -> p { p[-> x { -> y { x } }] }[n[-> p { -> x
{ -> y { -> f { f[x][y] } } }[-> p { p[-> x { -> y { y }
}] }[p]][-> n { -> p { -> x { p[n[p][x]] } } }[-> p {
p[-> x { -> y { y } }] }[p]]] }][-> x { -> y { -> f {
f[x][y] } } }[-> p { -> x { x } }][-> p { -> x { x } }]]]
}][m] } }[m][n]][n]][x] }][-> p { -> x { x } }] } } }][n][-> m
{ -> n { n[-> m { -> n { n[-> n { -> p { -> x {
p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x } }] } }[-> p {
-> x { p[p[x]] } }][-> p { -> x { p[p[p[p[p[x]]]]] } }]]][x]
}]][-> f { -> x { f[-> y { x[x][y] }] }[-> x { f[-> y {
x[x][y] }] }] }[-> f { -> m { -> n { -> b { b }[-> m {
-> n { -> n { n[-> x { -> x { -> y { y } } }][-> x {
-> y { x } }] }[-> m { -> n { n[-> n { -> p { p[-> x {
-> y { x } }] }[n[-> p { -> x { -> y { -> f { f[x][y] } }
}[-> p { p[-> x { -> y { y } }] }[p]][-> n { -> p { ->
x { p[n[p][x]] } } }[-> p { p[-> x { -> y { y } }] }[p]]]
}][-> x { -> y { -> f { f[x][y] } } }[-> p { -> x { x }
}][-> p { -> x { x } }]]] }][m] } }[m][n]] } }[n][m]][-> x {
f[-> m { -> n { n[-> n { -> p { p[-> x { -> y { x } }]
}[n[-> p { -> x { -> y { -> f { f[x][y] } } }[-> p {
p[-> x { -> y { y } }] }[p]][-> n { -> p { -> x {
p[n[p][x]] } } }[-> p { p[-> x { -> y { y } }] }[p]]] }][->
x { -> y { -> f { f[x][y] } } }[-> p { -> x { x } }][-> p
{ -> x { x } }]]] }][m] } }[m][n]][n][x] }][m] } } }][n][-> m {
-> n { n[-> m { -> n { n[-> n { -> p { -> x {
p[n[p][x]] } } }][m] } }[m]][-> p { -> x { x } }] } }[-> p {
-> x { p[p[x]] } }][-> p { -> x { p[p[p[p[p[x]]]]] } }]]] }
}][n]]]] }]
Constructing programs entirely out of procs takes a lot of effort, but we’ve seen that it’s possible to get real work done as long as we don’t mind applying a bit of ingenuity. Let’s take a quick look at a couple of other techniques for writing code in this minimal environment.
Using code to represent data has some interesting advantages. Our proc-based lists don’t have to be static: a list is just code that does the right thing when we pass it to FIRST and REST, so we can easily implement lists that calculate their contents on the fly, also known as streams. In fact, there’s no reason why streams even need to be finite, because the calculation only has to generate the list contents as they’re consumed, so it can keep producing new values indefinitely.
For example, here’s how to implement an infinite stream of zeros:
ZEROS = Z[-> f { UNSHIFT[f][ZERO] }]
This is the “no cheating” version of ZEROS = UNSHIFT[ZEROS][ZERO], a data structure defined in terms of itself. As programmers, we’re generally comfortable with the idea of defining a recursive function in terms of itself, but defining a data structure in terms of itself might seem weird and unusual; in this setting, they’re exactly the same thing, and the Z combinator makes both completely legitimate.
On the console, we can see that ZEROS behaves just like a list, albeit one with no end in sight:
>> to_integer(FIRST[ZEROS])
=> 0
>> to_integer(FIRST[REST[ZEROS]])
=> 0
>> to_integer(FIRST[REST[REST[REST[REST[REST[ZEROS]]]]]])
=> 0
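As an aside, native Ruby expresses the same on-demand idea with Enumerator; this is only an analogy to the proc-based stream, not part of the proc-only toolkit:

```ruby
# An endless stream of zeros, generated lazily: the block only runs
# as far as a consumer pulls values out of it.
zeros = Enumerator.new do |yielder|
  loop { yielder << 0 }
end

zeros.take(5) # => [0, 0, 0, 0, 0]
```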
A helper method to turn this stream into a Ruby array would be convenient, but to_array will run forever unless we explicitly stop the conversion process. An optional “maximum size” argument does the trick:
def to_array(l, count = nil)
  array = []

  until to_boolean(IS_EMPTY[l]) || count == 0
    array.push(FIRST[l])
    l = REST[l]
    count = count - 1 unless count.nil?
  end

  array
end
This lets us retrieve any number of elements from the stream and turn them into an array:
>> to_array(ZEROS, 5).map { |p| to_integer(p) }
=> [0, 0, 0, 0, 0]
>> to_array(ZEROS, 10).map { |p| to_integer(p) }
=> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>> to_array(ZEROS, 20).map { |p| to_integer(p) }
=> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
ZEROS doesn’t calculate a new element each time, but that’s easy enough to do. Here’s a stream that counts upward from a given number:
>> UPWARDS_OF = Z[-> f { -> n { UNSHIFT[-> x { f[INCREMENT[n]][x] }][n] } }]
=> #<Proc (lambda)>
>> to_array(UPWARDS_OF[ZERO], 5).map { |p| to_integer(p) }
=> [0, 1, 2, 3, 4]
>> to_array(UPWARDS_OF[FIFTEEN], 20).map { |p| to_integer(p) }
=> [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]
A more elaborate stream contains all the multiples of a given number:
>> MULTIPLES_OF =
     -> m {
       Z[-> f { -> n { UNSHIFT[-> x { f[ADD[m][n]][x] }][n] } }][m]
     }
=> #<Proc (lambda)>
>> to_array(MULTIPLES_OF[TWO], 10).map { |p| to_integer(p) }
=> [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
>> to_array(MULTIPLES_OF[FIVE], 20).map { |p| to_integer(p) }
=> [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100]
Remarkably, we can manipulate these infinite streams like any other list. For example, we can make a new stream by mapping a proc over an existing one:
>> to_array(MULTIPLES_OF[THREE], 10).map { |p| to_integer(p) }
=> [3, 6, 9, 12, 15, 18, 21, 24, 27, 30]
>> to_array(MAP[MULTIPLES_OF[THREE]][INCREMENT], 10).map { |p| to_integer(p) }
=> [4, 7, 10, 13, 16, 19, 22, 25, 28, 31]
>> to_array(MAP[MULTIPLES_OF[THREE]][MULTIPLY[TWO]], 10).map { |p| to_integer(p) }
=> [6, 12, 18, 24, 30, 36, 42, 48, 54, 60]
We can even write procs that combine two streams to make a third:
>> MULTIPLY_STREAMS =
     Z[-> f { -> k { -> l {
       UNSHIFT[-> x { f[REST[k]][REST[l]][x] }][MULTIPLY[FIRST[k]][FIRST[l]]]
     } } }]
=> #<Proc (lambda)>
>> to_array(MULTIPLY_STREAMS[UPWARDS_OF[ONE]][MULTIPLES_OF[THREE]], 10).map { |p| to_integer(p) }
=> [3, 12, 27, 48, 75, 108, 147, 192, 243, 300]
Since the contents of a stream can be generated by any computation, there’s nothing to stop us creating an infinite list of the Fibonacci series, or the prime numbers, or all possible strings in alphabetical order, or anything else computable. This abstraction is a powerful one and doesn’t require any clever features on top of what we already have.
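The Fibonacci case can be sketched in the same UPWARDS_OF style. This is a self-contained sketch that rebuilds the chapter’s pair, list, and Z-combinator machinery so it runs on its own; for brevity the stream elements are native Ruby integers rather than Church numerals, and the take helper is a hypothetical stand-in for to_array with a count:

```ruby
# The chapter's machinery, restated so this snippet stands alone.
TRUE  = -> x { -> y { x } }
FALSE = -> x { -> y { y } }

PAIR  = -> x { -> y { -> f { f[x][y] } } }
LEFT  = -> p { p[-> x { -> y { x } }] }
RIGHT = -> p { p[-> x { -> y { y } }] }

UNSHIFT = -> l { -> x { PAIR[FALSE][PAIR[x][l]] } }
FIRST   = -> l { LEFT[RIGHT[l]] }
REST    = -> l { RIGHT[RIGHT[l]] }

Z = -> f { -> x { f[-> y { x[x][y] }] }[-> x { f[-> y { x[x][y] }] }] }

# Each cell holds the current value a and defers the rest of the
# stream, which continues from b and a + b.
FIBS = Z[-> f { -> a { -> b { UNSHIFT[-> x { f[b][a + b][x] }][a] } } }][0][1]

# A small helper to pull the first n elements out of a stream.
take = ->(l, n) { n.zero? ? [] : [FIRST[l]] + take.(REST[l], n - 1) }

take.(FIBS, 8) # => [0, 1, 1, 2, 3, 5, 8, 13]
```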
During the FizzBuzz exercise, we used recursive functions like MOD and RANGE to demonstrate the use of the Z combinator. This is convenient, because it lets us translate from an unconstrained recursive Ruby implementation to a proc-only one without changing the structure of the code, but technically, we can implement these functions without the Z combinator by taking advantage of the behavior of Church numerals.
For example, our implementation of MOD[m][n] works by repeatedly subtracting n from m as long as n <= m, always checking the condition to decide whether to make the next recursive call. But we can get the same result by blindly performing the action “subtract n from m if n <= m” a fixed number of times instead of using recursion to dynamically control the repetition. We don’t know exactly how many times we need to repeat it, but we do know that m times is definitely enough (for the worst case where n is 1), and it doesn’t hurt to do it more times than necessary:
def decrease(m, n)
  if n <= m
    m - n
  else
    m
  end
end
>> decrease(17, 5)
=> 12
>> decrease(decrease(17, 5), 5)
=> 7
>> decrease(decrease(decrease(17, 5), 5), 5)
=> 2
>> decrease(decrease(decrease(decrease(17, 5), 5), 5), 5)
=> 2
>> decrease(decrease(decrease(decrease(decrease(17, 5), 5), 5), 5), 5)
=> 2
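The whole fixed-repetition scheme can be written out directly in plain Ruby. This is a sketch to show the idea, and mod_by_repetition is a name of our own invention:

```ruby
# Apply the "subtract n if n <= current value" step exactly m times;
# once the remainder is reached, further applications leave it unchanged.
def mod_by_repetition(m, n)
  result = m
  m.times { result = (n <= result) ? result - n : result }
  result
end
```

Like the proc-only version developed below, this returns its first argument untouched when asked to divide by zero, instead of looping forever.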
We can therefore rewrite MOD to make use of a proc that takes a number and either subtracts n from it (if it’s at least n) or returns it untouched. This proc gets called m times on m itself to give the final answer:
MOD =
  -> m { -> n {
    m[-> x {
      IF[IS_LESS_OR_EQUAL[n][x]][
        SUBTRACT[x][n]
      ][
        x
      ]
    }][m]
  } }
This version of MOD works just as well as the recursive one:
>> to_integer(MOD[THREE][TWO])
=> 1
>> to_integer(MOD[POWER[THREE][THREE]][ADD[THREE][TWO]])
=> 2
Although this implementation is arguably simpler than the original, it is both harder to read and less efficient in general, because it always performs a worst-case number of repeated calls instead of stopping as soon as possible. It’s also not extensionally equal to the original, because the old version of MOD would loop forever if we asked it to divide by ZERO (the condition n <= m would never become false), whereas this implementation just returns its first argument:
>> to_integer(MOD[THREE][ZERO])
=> 3
RANGE is slightly more challenging, but we can use a trick similar to the one that makes DECREMENT work: design a function that, when called n times on some initial argument, returns a list of n numbers from the desired range. As with DECREMENT, the secret is to use a pair to store both the resulting list and the information needed by the next iteration:
def countdown(pair)
  [pair.first.unshift(pair.last), pair.last - 1]
end
>> countdown([[], 10])
=> [[10], 9]
>> countdown(countdown([[], 10]))
=> [[9, 10], 8]
>> countdown(countdown(countdown([[], 10])))
=> [[8, 9, 10], 7]
>> countdown(countdown(countdown(countdown([[], 10]))))
=> [[7, 8, 9, 10], 6]
This is easy to rewrite with procs:
COUNTDOWN =
  -> p {
    PAIR[UNSHIFT[LEFT[p]][RIGHT[p]]][DECREMENT[RIGHT[p]]]
  }
Now we just need to implement RANGE so that it calls COUNTDOWN the right number of times (the range from m to n always has n - m + 1 elements) and unpacks the result list from the final pair:
RANGE =
  -> m { -> n {
    LEFT[INCREMENT[SUBTRACT[n][m]][COUNTDOWN][PAIR[EMPTY][n]]]
  } }
Again, this combinator-free version works just fine:
>> to_array(RANGE[FIVE][TEN]).map { |p| to_integer(p) }
=> [5, 6, 7, 8, 9, 10]
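The same structure can be checked in plain Ruby, restating the countdown method from above so the sketch stands alone; range_by_countdown is our own illustrative name:

```ruby
# countdown as defined earlier in the text
def countdown(pair)
  [pair.first.unshift(pair.last), pair.last - 1]
end

# Call countdown (n - m + 1) times, then unpack the list from the final pair.
def range_by_countdown(m, n)
  pair = [[], n]
  (n - m + 1).times { pair = countdown(pair) }
  pair.first
end
```

Each iteration pushes the next-smaller number onto the front of the list, so after n - m + 1 iterations the list holds exactly the numbers from m to n.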
We’re able to implement MOD and RANGE by performing a predetermined number of iterations—rather than executing an arbitrary loop that runs until its condition becomes true—because they’re primitive recursive functions. See Partial Recursive Functions for more about this.
Our FizzBuzz experiment has given us a sense of how it feels to write programs in the untyped lambda calculus. The constraints forced us to implement a lot of basic functionality from scratch rather than relying on features of the language, but we did eventually manage to build all of the data structures and algorithms we needed to solve the problem we were given.
Of course, we haven’t really been writing lambda calculus programs, because we don’t have a lambda calculus interpreter; we’ve just written Ruby programs in the style of the lambda calculus to get a feel for how such a minimal language can work. But we already have all the knowledge we need to build a lambda calculus interpreter and use it to evaluate actual lambda calculus expressions, so let’s give that a try.
The untyped lambda calculus is a programming language with only three kinds of expression: variables, function definitions, and calls. Rather than introduce a new concrete syntax for lambda calculus expressions, we’ll stick with the Ruby conventions—variables look like x, functions look like -> x { x }, and calls look like x[y]—and try not to get the two languages confused.
In this context, the word calculus means a system of rules for manipulating strings of symbols.[49] The native syntax of the lambda calculus uses the Greek letter lambda (λ) in place of Ruby’s -> symbol; for instance, ONE is written as λp.λx.p x.
We can implement LCVariable, LCFunction, and LCCall syntax classes in the usual way:
class LCVariable < Struct.new(:name)
  def to_s
    name.to_s
  end

  def inspect
    to_s
  end
end

class LCFunction < Struct.new(:parameter, :body)
  def to_s
    "-> #{parameter} { #{body} }"
  end

  def inspect
    to_s
  end
end

class LCCall < Struct.new(:left, :right)
  def to_s
    "#{left}[#{right}]"
  end

  def inspect
    to_s
  end
end
These classes let us build abstract syntax trees of lambda calculus expressions, just like we did with Simple in Chapter 2 and regular expressions in Chapter 3:
>> one =
     LCFunction.new(:p,
       LCFunction.new(:x,
         LCCall.new(LCVariable.new(:p), LCVariable.new(:x))
       )
     )
=> -> p { -> x { p[x] } }
>> increment =
     LCFunction.new(:n,
       LCFunction.new(:p,
         LCFunction.new(:x,
           LCCall.new(
             LCVariable.new(:p),
             LCCall.new(LCCall.new(LCVariable.new(:n), LCVariable.new(:p)), LCVariable.new(:x))
           )
         )
       )
     )
=> -> n { -> p { -> x { p[n[p][x]] } } }
>> add =
     LCFunction.new(:m,
       LCFunction.new(:n,
         LCCall.new(LCCall.new(LCVariable.new(:n), increment), LCVariable.new(:m))
       )
     )
=> -> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } } }][m] } }
Because the language has such minimal syntax, those three classes are enough to represent any lambda calculus program.
Now we’re going to give a small-step operational semantics for the lambda calculus by implementing a #reduce method on each syntax class. Small-step is an attractive choice, because it allows us to see the individual steps of evaluation, which is something we can’t easily do for Ruby expressions.
Before we can implement #reduce, we need another operation called #replace, which finds occurrences of a particular variable inside an expression and replaces them with some other expression:
class LCVariable
  def replace(name, replacement)
    if self.name == name
      replacement
    else
      self
    end
  end
end

class LCFunction
  def replace(name, replacement)
    if parameter == name
      self
    else
      LCFunction.new(parameter, body.replace(name, replacement))
    end
  end
end

class LCCall
  def replace(name, replacement)
    LCCall.new(left.replace(name, replacement), right.replace(name, replacement))
  end
end
This works in the obvious way on variables and calls:
>> expression = LCVariable.new(:x)
=> x
>> expression.replace(:x, LCFunction.new(:y, LCVariable.new(:y)))
=> -> y { y }
>> expression.replace(:z, LCFunction.new(:y, LCVariable.new(:y)))
=> x
>> expression =
     LCCall.new(
       LCCall.new(
         LCCall.new(LCVariable.new(:a), LCVariable.new(:b)),
         LCVariable.new(:c)
       ),
       LCVariable.new(:b)
     )
=> a[b][c][b]
>> expression.replace(:a, LCVariable.new(:x))
=> x[b][c][b]
>> expression.replace(:b, LCFunction.new(:x, LCVariable.new(:x)))
=> a[-> x { x }][c][-> x { x }]
For functions, the situation is more complicated. #replace only acts on the body of a function, and it only replaces free variables—that is, variables that haven’t been bound to the function by being named as its parameter:
>> expression =
     LCFunction.new(:y,
       LCCall.new(LCVariable.new(:x), LCVariable.new(:y))
     )
=> -> y { x[y] }
>> expression.replace(:x, LCVariable.new(:z))
=> -> y { z[y] }
>> expression.replace(:y, LCVariable.new(:z))
=> -> y { x[y] }
This lets us replace occurrences of a variable throughout an expression without accidentally changing unrelated variables that happen to have the same name:
>> expression =
     LCCall.new(
       LCCall.new(LCVariable.new(:x), LCVariable.new(:y)),
       LCFunction.new(:y, LCCall.new(LCVariable.new(:y), LCVariable.new(:x)))
     )
=> x[y][-> y { y[x] }]
>> expression.replace(:x, LCVariable.new(:z))
=> z[y][-> y { y[z] }]
>> expression.replace(:y, LCVariable.new(:z))
=> x[z][-> y { y[x] }]
Both occurrences of x are free in the original expression, so they both get replaced. Only the first occurrence of y is a free variable, so only that one is replaced. The second y is a function parameter, not a variable, and the third y is a variable that belongs to that function and shouldn’t be touched.
Our simple #replace implementation won’t work on certain inputs. It doesn’t properly handle replacements that contain free variables:
>> expression =
     LCFunction.new(:x,
       LCCall.new(LCVariable.new(:x), LCVariable.new(:y))
     )
=> -> x { x[y] }
>> replacement = LCCall.new(LCVariable.new(:z), LCVariable.new(:x))
=> z[x]
>> expression.replace(:y, replacement)
=> -> x { x[z[x]] }
It’s not okay to just paste z[x] into the body of -> x { … } like that, because the x in z[x] is a free variable and should remain free afterward, but here it gets accidentally captured by the function parameter with the same name.[50] We can ignore this deficiency, because we’ll only be evaluating expressions that don’t contain any free variables, so it won’t actually cause a problem, but beware that a more sophisticated implementation is needed in the general case.
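For the curious, here is one way such a more sophisticated implementation might look. It’s a self-contained sketch of the renaming idea described in the footnote, not the book’s code: it restates the three classes (with a #free_variables helper) so it can stand alone, and it ignores some further edge cases, such as a fresh name colliding with a binder nested inside the body.

```ruby
class LCVariable < Struct.new(:name)
  def free_variables
    [name]
  end

  def replace(name, replacement)
    self.name == name ? replacement : self
  end
end

class LCCall < Struct.new(:left, :right)
  def free_variables
    left.free_variables | right.free_variables
  end

  def replace(name, replacement)
    LCCall.new(left.replace(name, replacement), right.replace(name, replacement))
  end
end

class LCFunction < Struct.new(:parameter, :body)
  def free_variables
    body.free_variables - [parameter]
  end

  def replace(name, replacement)
    if parameter == name
      self
    elsif replacement.free_variables.include?(parameter)
      # The parameter would capture a free variable of the replacement,
      # so rename it to a fresh name first, then retry the replacement.
      fresh = :"#{parameter}_"
      fresh = :"#{fresh}_" while (body.free_variables | replacement.free_variables | [name]).include?(fresh)
      LCFunction.new(fresh, body.replace(parameter, LCVariable.new(fresh))).replace(name, replacement)
    else
      LCFunction.new(parameter, body.replace(name, replacement))
    end
  end
end
```

With these definitions, replacing y with z[x] inside -> x { x[y] } first renames the bound x out of the way, so the free x in z[x] stays free in the result.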
The whole point of #replace is to give us a way to implement the semantics of function calls. In Ruby, when a proc is called with one or more arguments, the body of the proc gets evaluated in an environment where each argument has been assigned to a local variable, so each use of that variable behaves like the argument itself: in a metaphorical sense, calling the proc -> x, y { x + y } with the arguments 1 and 2 produces the intermediate expression 1 + 2, and that’s what gets evaluated to produce the final result.
We can apply the same idea more literally in the lambda calculus by actually replacing variables in a function’s body when we evaluate a call. To do this, we can define an LCFunction#call method that does the replacement and returns the result:
class LCFunction
  def call(argument)
    body.replace(parameter, argument)
  end
end
This lets us simulate the moment when a function gets called:
>> function =
     LCFunction.new(:x,
       LCFunction.new(:y,
         LCCall.new(LCVariable.new(:x), LCVariable.new(:y))
       )
     )
=> -> x { -> y { x[y] } }
>> argument = LCFunction.new(:z, LCVariable.new(:z))
=> -> z { z }
>> function.call(argument)
=> -> y { -> z { z }[y] }
Function calls are the only thing that actually happens when a lambda calculus program is evaluated, so now we’re ready to implement #reduce. It’ll find a place in the expression where a function call can occur, then use #call to make it happen. We just need to be able to identify which expressions are actually callable…
class LCVariable
  def callable?
    false
  end
end

class LCFunction
  def callable?
    true
  end
end

class LCCall
  def callable?
    false
  end
end
…and then we can write #reduce:
class LCVariable
  def reducible?
    false
  end
end

class LCFunction
  def reducible?
    false
  end
end

class LCCall
  def reducible?
    left.reducible? || right.reducible? || left.callable?
  end

  def reduce
    if left.reducible?
      LCCall.new(left.reduce, right)
    elsif right.reducible?
      LCCall.new(left, right.reduce)
    else
      left.call(right)
    end
  end
end
In this implementation, function calls are the only kind of syntax that can be reduced. Reducing LCCall works a bit like reducing Add or Multiply from Simple: if either of its subexpressions is reducible, we reduce that; if not, we actually perform the call by calling the left subexpression (which should be an LCFunction) with the right one as its argument. This strategy is known as call-by-value evaluation—first we reduce the argument to an irreducible value, then we perform the call.
Let’s test our implementation by using the lambda calculus to calculate one plus one:
>> expression = LCCall.new(LCCall.new(add, one), one)
=> -> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } } }][m] } }[-> p { -> x { p[x] } }][-> p { -> x { p[x] } }]
>> while expression.reducible?
     puts expression
     expression = expression.reduce
   end; puts expression
-> m { -> n { n[-> n { -> p { -> x { p[n[p][x]] } } }][m] } }[-> p { -> x { p[x] } }][-> p { -> x { p[x] } }]
-> n { n[-> n { -> p { -> x { p[n[p][x]] } } }][-> p { -> x { p[x] } }] }[-> p { -> x { p[x] } }]
-> p { -> x { p[x] } }[-> n { -> p { -> x { p[n[p][x]] } } }][-> p { -> x { p[x] } }]
-> x { -> n { -> p { -> x { p[n[p][x]] } } }[x] }[-> p { -> x { p[x] } }]
-> n { -> p { -> x { p[n[p][x]] } } }[-> p { -> x { p[x] } }]
-> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } }
=> nil
Well, something definitely happened, but we didn’t get quite the result we wanted: the final expression is -> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } }, but the lambda calculus representation of the number two is supposed to be -> p { -> x { p[p[x]] } }. What went wrong?
The mismatch is caused by the evaluation strategy we’re using. There are still reducible function calls buried within the result—the call -> p { -> x { p[x] } }[p] could be reduced to -> x { p[x] }, for instance—but #reduce doesn’t touch them, because they appear inside the body of a function, and our semantics doesn’t treat functions as reducible.[51]
However, as discussed in Equality, two expressions with different syntax can still be considered equal if they have the same behavior. We know how the lambda calculus representation of the number two is supposed to behave: if we give it two arguments, it calls the first argument twice on the second argument. So let’s try calling our expression with two made-up variables, inc and zero,[52] and see what it actually does:
>> inc, zero = LCVariable.new(:inc), LCVariable.new(:zero)
=> [inc, zero]
>> expression = LCCall.new(LCCall.new(expression, inc), zero)
=> -> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } }[inc][zero]
>> while expression.reducible?
     puts expression
     expression = expression.reduce
   end; puts expression
-> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } }[inc][zero]
-> x { inc[-> p { -> x { p[x] } }[inc][x]] }[zero]
inc[-> p { -> x { p[x] } }[inc][zero]]
inc[-> x { inc[x] }[zero]]
inc[inc[zero]]
=> nil
That’s exactly how we expect the number two to behave, so -> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } } is the right result after all, even though it looks slightly different than the expression we were expecting.
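We can also double-check this behavioral argument with native Ruby procs, transcribing both expressions as lambdas in the chapter’s Ruby encoding and converting each to a number with the to_integer trick of calling it on an increment proc and zero:

```ruby
# Both results, transcribed as native Ruby procs.
expected = -> p { -> x { p[p[x]] } }                          # the usual TWO
actual   = -> p { -> x { p[-> p { -> x { p[x] } }[p][x]] } }  # our reduce result

# Convert a Church numeral to a native integer, as in the chapter.
to_integer = -> church { church[-> n { n + 1 }][0] }
```

Both to_integer[expected] and to_integer[actual] evaluate to 2, confirming that the two expressions denote the same number.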
Now that we’ve got a working semantics, let’s finish things off by building a parser for lambda calculus expressions. As usual, we can use Treetop to write a grammar:
grammar LambdaCalculus
  rule expression
    calls / variable / function
  end

  rule calls
    first:(variable / function) rest:('[' expression ']')+ {
      def to_ast
        arguments.map(&:to_ast).inject(first.to_ast) { |l, r| LCCall.new(l, r) }
      end

      def arguments
        rest.elements.map(&:expression)
      end
    }
  end

  rule variable
    [a-z]+ {
      def to_ast
        LCVariable.new(text_value.to_sym)
      end
    }
  end

  rule function
    '-> ' parameter:[a-z]+ ' { ' body:expression ' }' {
      def to_ast
        LCFunction.new(parameter.text_value.to_sym, body.to_ast)
      end
    }
  end
end
As discussed in Implementing Parsers, Treetop grammars typically generate right-associative trees, so this grammar has to do extra work to accommodate the lambda calculus’s left-associative function call syntax. The calls rule matches one or more consecutive calls (like a[b][c][d]), and the #to_ast method on the resulting concrete syntax tree node uses Enumerable#inject to roll up the arguments of those calls into a left-associative abstract syntax tree.
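The folding behavior of Enumerable#inject can be seen in isolation by substituting the pair [l, r] for LCCall.new(l, r) and letting strings stand in for AST nodes (a toy example, not part of the interpreter):

```ruby
first = 'a'
arguments = %w[b c d]

# Each step wraps the accumulated tree as the left half of a new pair,
# exactly as the grammar wraps it as the left side of a new LCCall.
tree = arguments.inject(first) { |l, r| [l, r] }
```

The result nests to the left, [[['a', 'b'], 'c'], 'd'], mirroring the shape of the a[b][c][d] call tree.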
The parser and operational semantics together give us a complete implementation of the lambda calculus, allowing us to read expressions and evaluate them:
>> require 'treetop'
=> true
>> Treetop.load('lambda_calculus')
=> LambdaCalculusParser
>> parse_tree = LambdaCalculusParser.new.parse('-> x { x[x] }[-> y { y }]')
=> SyntaxNode+Calls2+Calls1 offset=0, "…}[-> y { y }]" (to_ast,arguments,first,rest):
SyntaxNode+Function1+Function0 offset=0, "… x { x[x] }" (to_ast,parameter,body):
SyntaxNode offset=0, "-> "
SyntaxNode offset=3, "x":
SyntaxNode offset=3, "x"
SyntaxNode offset=4, " { "
SyntaxNode+Calls2+Calls1 offset=7, "x[x]" (to_ast,arguments,first,rest):
SyntaxNode+Variable0 offset=7, "x" (to_ast):
SyntaxNode offset=7, "x"
SyntaxNode offset=8, "[x]":
SyntaxNode+Calls0 offset=8, "[x]" (expression):
SyntaxNode offset=8, "["
SyntaxNode+Variable0 offset=9, "x" (to_ast):
SyntaxNode offset=9, "x"
SyntaxNode offset=10, "]"
SyntaxNode offset=11, " }"
SyntaxNode offset=13, "[-> y { y }]":
SyntaxNode+Calls0 offset=13, "[-> y { y }]" (expression):
SyntaxNode offset=13, "["
SyntaxNode+Function1+Function0 offset=14, "… { y }" (to_ast,parameter,body):
SyntaxNode offset=14, "-> "
SyntaxNode offset=17, "y":
SyntaxNode offset=17, "y"
SyntaxNode offset=18, " { "
SyntaxNode+Variable0 offset=21, "y" (to_ast):
SyntaxNode offset=21, "y"
SyntaxNode offset=22, " }"
SyntaxNode offset=24, "]"
>> expression = parse_tree.to_ast
=> -> x { x[x] }[-> y { y }]
>> expression.reduce
=> -> y { y }[-> y { y }]
[43] This is called currying, and we can use Proc#curry to do this transformation automatically.
[44] We could certainly model printing to the console by introducing a proc to represent standard output and devising a convention for how to send text to it, but that would complicate the exercise in an uninteresting way. FizzBuzz isn’t about printing, it’s about arithmetic and control flow.
[45] To be more specific, what we want to implement here are the nonnegative integers: zero, one, two, three, and so on.
[46] This is called “iterating the function.”
[47] Actually, “takes two arguments” is inaccurate, because we’re restricting ourselves to single-argument procs (see Arguments). To be technically correct, we should say “takes one argument and returns a new proc that takes another argument,” but that’s too long-winded, so we’ll stick with the shorthand and just remember what we really mean.
[48] You might protest that 3 - 5 = 0 isn’t called “subtraction” where you come from, and you’d be right: the technical name for this operation is “monus,” because the nonnegative integers under addition form a commutative monoid instead of a proper abelian group.
[49] Most people associate it with the differential and integral calculus, a system concerned with rates of change and accumulation of quantities in mathematical functions.
[50] The correct behavior is to automatically rename the function’s parameter so that it doesn’t clash with any free variables: rewrite -> x { x[y] } as the equivalent expression -> w { w[y] }, say, and then safely perform the replacement to get -> w { w[z[x]] }, leaving x free.
[51] We could fix this by reimplementing #reduce to use a more aggressive evaluation strategy (like applicative order or normal order evaluation) that performs reduction on the bodies of functions, but a function body taken in isolation usually contains free variables, so that would require a more robust implementation of #replace.
[52] We’re taking a risk by evaluating an expression containing the free variables inc and zero, but fortunately, none of the functions in the expression have arguments with those names, so in this specific case, there’s no danger of either variable being accidentally captured.