Thursday 6 August 2009

Less Lazy about Understanding Laziness

A recent post by Uncle Bob made me realize how little I understand about laziness and how it affects the computational complexity of a solution. The quote below from the post sums it up well:

You don’t get the benefit of lazy sequences unless you use the lazy sequences. Creating a brand new one each time may generate the right answer, but it isn’t lazy, and can burn a lot of execution cycles.
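
To make that concrete, here is a sketch of my own (not from Uncle Bob's post): two ways of defining the Fibonacci numbers as a lazy sequence.


;; Generates the right answer, but each recursive call to (fibs-slow)
;; builds a brand new lazy sequence, so already-realized elements are
;; never shared and the amount of work balloons.
(defn fibs-slow []
  (lazy-cat [0 1] (map + (fibs-slow) (next (fibs-slow)))))

;; Naming a single sequence and referring back to it means each
;; element is realized exactly once.
(def fibs (lazy-cat [0 1] (map + fibs (next fibs))))


The second version holds onto the head of the sequence, which has its own memory cost, but it actually gets the benefit of laziness.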


Two macros in clojure.contrib.seq-utils, rec-seq and rec-cat, help to understand this in more detail. They create sequences that refer to themselves without generating a new copy each time.

You can use these macros to efficiently calculate values with lazy sequences. For example, the Lucas Numbers can be calculated as:


user> (take 10 (rec-cat s [2 1] (map + s (next s))))
(2 1 3 4 7 11 18 29 47 76)
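
rec-seq is the more general of the two, taking a whole body expression rather than segments to concatenate. A small example of my own, the powers of two:


user> (take 10 (rec-seq s (cons 1 (map #(* 2 %) s))))
(1 2 4 8 16 32 64 128 256 512)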


So how do rec-seq and rec-cat hold onto the same copy of the lazy sequence without creating new ones? An atom (not sure why it's not a local var?) holds onto the sequence for the whole computation; reset! fills it in once the sequence has been constructed.
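
The definitions in seq-utils are short (quoted, lightly reformatted, from the clojure.contrib source):


(defmacro rec-seq
  "Similar to lazy-seq but binds the resulting seq to the supplied
   binding-name, allowing for recursive expressions."
  [binding-name & body]
  `(let [s# (atom nil)]
     (reset! s# (lazy-seq (let [~binding-name @s#] ~@body)))))

(defmacro rec-cat
  "Similar to lazy-cat but binds the resulting sequence to the supplied
   binding-name, allowing for recursive expressions."
  [binding-name & exprs]
  `(rec-seq ~binding-name (lazy-cat ~@exprs)))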



So the atom is reset! to a lazy-seq that wraps the body (in this case the map expression), with the binding name bound to the atom's own contents. Since the map expression is defined in terms of the sequence itself, this sets up a recursive definition which, because of the atom and reset!, uses the same lazy sequence for the whole duration.
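
Expanding the Lucas numbers call makes this easier to see. Ignoring gensym details, (rec-cat s [2 1] (map + s (next s))) becomes roughly:


(let [s# (atom nil)]
  (reset! s# (lazy-seq
               (let [s @s#]
                 (lazy-cat [2 1] (map + s (next s)))))))


reset! returns the new value, so the whole expression evaluates to the lazy sequence itself. When an element is demanded, the body runs with s bound to that same sequence, so the map reads elements that have already been computed and cached rather than spawning a fresh copy.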

I think that's going to take a little while to sink in...