Wednesday, 10 June 2009

Generating Images using ImageMagick

In order to play with neural networks for OCR, I needed to create a series of test images.

ImageMagick is a set of command line tools that provide image manipulation functions. It's pretty simple to write a quick script to generate a bucket load of images.

The script below generates a load of images of digits between 0 and 9 using every font available within ImageMagick:


#!/usr/bin/perl
use strict;
use warnings;

# Get the list of fonts that ImageMagick knows about
my @fonts = split('\n', qx{identify -list font | grep Font: | cut -d: -f2});
my @numbers = (0..9);

foreach my $font (@fonts) {
    chomp($font);
    foreach my $number (@numbers) {
        my $filename = sprintf("%s_%s.gif", $font, $number);
        my $commandLine = "convert -fill black -pointsize 8 -size 8x8 -font $font label:$number $filename";
        qx{$commandLine};
    }
}


5 minutes later, and I've got 340 test images. Neat.

[Image: generated images showing a big list of numbers]

Sunday, 7 June 2009

Learning some lessons from Programming Clojure

My copy of Programming Clojure by Stuart Halloway arrived over the weekend. It's a thoroughly enjoyable read so far and I've definitely learned a few things. For example, I hadn't realized that keywords can act as functions for maps, nor did I realize that a set is a function for determining membership. D'oh!


;; A keyword is a function
user> (:houses {:biscuits 4 :houses 7})
7

;; A map is a function
user> ({:biscuits 4 :houses 7} :houses)
7

;; A set is a function on membership
user> (#{4 5 6} 7)
nil

;; On success returns the element
user> (#{4 5 6} 4)
4


The section on Functional Programming was very interesting, especially with respect to laziness. Stuart presents "Six Rules of Clojure FP", which have a heavy emphasis on laziness. Item 5 is:

Know the sequence library. You can often write code without using recur or the lazy APIs at all


Looking back at my neural network code made me realize I'd reinvented several wheels there, and the code could be significantly tidied by just using the standard libraries. Revisiting the sequence documentation helped remind me of the various tools in the armoury, particularly iterate and reduce.
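For reference, here's how those two idioms behave in isolation (nothing here beyond clojure.core):

```clojure
;; reduce threads an accumulator through a sequence
(reduce + 0 [1 2 3 4])
;; => 10

;; map over two sequences pairs their elements up,
;; handy for walking samples and expected values in lockstep
(map list [:a :b :c] [1 2 3])
;; => ((:a 1) (:b 2) (:c 3))
```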

As a reminder, the two lumps of code in question are run-patterns and train-network.



(defn run-patterns
  [network samples expecteds]
  (if (empty? samples)
    network
    (let [expected (first expecteds)
          sample (first samples)
          [ah ao] (run-network sample network)
          updated-network (back-propagate expected sample [ah ao] network)]
      (recur updated-network (rest samples) (rest expecteds)))))

(defn train-network
  ([samples expected iterations]
     (train-network (create-network (count (first samples))
                                    num-hidden
                                    (count (first expected)))
                    samples expected iterations))
  ([network samples expected iterations]
     (if (zero? iterations)
       network
       (recur (run-patterns network samples expected) samples expected (dec iterations)))))


Starting with run-patterns: this takes a sequence of values (the samples and expected values) and reduces it down to a single output (the network) using the supplied function. It can be refactored into a simple reduce operation (sequence in, value out), which simplifies the code significantly.


(defn run-patterns
  [network samples expecteds]
  (reduce
   (fn [n expectations]
     (let [[sample expected] expectations
           [ah ao] (run-network sample n)]
       (back-propagate expected sample [ah ao] n)))
   network ;; initial value
   (map list samples expecteds)))


Next is train-network. At the moment the implementation is burdened by a hand-rolled looping construct for a specified number of iterations. What if, instead of calculating a fixed number, we just calculated an infinite sequence of trained neural networks (lazily, obviously!) and let the caller decide which element they'd like?

The new version of train-network drops the number of iterations and returns an infinite sequence of neural networks, each trained one more time than the last. The Clojure function iterate does the job here:

Returns a lazy sequence of x, (f x), (f (f x)) etc. f must be free of side-effects
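A quick REPL check (my own, not from the book) that picking the nth element of such an infinite sequence is safe, since only the elements actually requested ever get computed:

```clojure
;; forces only 101 steps of an otherwise infinite sequence
(nth (iterate inc 0) 100)
;; => 100
```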


Once we refactor to use iterate we get the following. The definition of train-network is considerably simpler, and the only change to the example function is specifying the number of iterations to use.



(defn train-network
  ([samples expected]
     (train-network (create-network (count (first samples))
                                    num-hidden
                                    (count (first expected)))
                    samples expected))
  ([network samples expected]
     (iterate (fn [n] (run-patterns n samples expected)) network)))

(defn example []
  (let [x (nth (apply train-network xor-test-data) 100)]
    (println (first (run-network [0 0] x)) "-->" 0)
    (println (first (run-network [0 1] x)) "-->" 1)
    (println (first (run-network [1 0] x)) "-->" 1)
    (println (first (run-network [1 1] x)) "-->" 0)))



I've still got to read the remaining chapters of the book, but I'm looking forward to learning more. Definitely recommended.

Wednesday, 3 June 2009

Back propagation algorithm in Clojure

In my previous post I looked at the most basic type of neural network, the single layer perceptron. Next, we'll look at the multi-layer perceptron network. This is a more powerful class of neural network than the single layer because it can handle non linearly separable data (such as the XOR test case which failed last time).

The main difference in a multi-layer perceptron is that each neuron is only activated based on the results of an activation function. This makes use of the kernel trick that maps a non-linear problem into a higher dimensional problem which is linearly separable (see also support vector machines). The mathematics behind this is better explained here [PDF].

A typical activation function is tanh, shown below between -10 and 10, generated using Wolfram Alpha:

[Image: graph of tanh(x) from -10 to 10, from Wolfram Alpha]

So how do we model this network in a functional programming language like Clojure? A typical solution using an imperative/OO language (see here for a good Python implementation) would use arrays representing the weights and modify things in situ, but that's not very functional.

We start by defining some constants (including the activation function and its derivative). bp-nn represents the network itself.


(def activation-function (fn [x] (Math/tanh x)))
(def activation-function-derivation (fn [y] (- 1.0 (* y y))))
(def num-hidden 2)

(def learning-rate 0.5)
(def momentum 0.1)

(defstruct bp-nn :weight-input :weight-output :change-input :change-output)
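As a quick sanity check (my own sketch, not from the original post): since d/dx tanh(x) = 1 - tanh²(x), activation-function-derivation expects the output of tanh rather than the original input. A finite-difference comparison confirms the pairing:

```clojure
(def activation-function (fn [x] (Math/tanh x)))
(def activation-function-derivation (fn [y] (- 1.0 (* y y))))

;; compare a numerical derivative of tanh at x with the closed form
;; evaluated on tanh(x) itself
(let [x 0.3
      h 1e-6
      numeric (/ (- (activation-function (+ x h)) (activation-function x)) h)
      closed (activation-function-derivation (activation-function x))]
  (< (Math/abs (- numeric closed)) 1e-4))
;; => true
```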


Next we create some simple initialization functions to create an initial neural network, together with some helper functions for iterating over matrices (which we'll model as lists of lists). Usually you'd use random weights to initialize things, but allowing fixed values makes testing possible.



(defn make-matrix
  "Create a matrix (list of lists)"
  [width height]
  (repeat width (repeat height 0)))

(defn matrix-map
  "Apply a function to every element in a matrix"
  [m func]
  (map (fn [x] (map func x)) m))

(defn rand-range
  "Return a real number within the given range"
  [l h]
  (+ (rand (- h l)) l))

(defn create-network
  "Create a network with the given number of input, hidden and output nodes"
  ([input hidden output]
     (create-network input hidden output true))
  ([input hidden output use-random-weights]
     (let [i (inc input) ;; extra input node for the bias term
           w-func (if use-random-weights (fn [_] (rand-range -0.2 0.2)) (fn [_] 0.2))
           o-func (if use-random-weights (fn [_] (rand-range -2.0 2.0)) (fn [_] 2.0))]
       (struct bp-nn
               (matrix-map (make-matrix i hidden) w-func)
               (matrix-map (make-matrix hidden output) o-func)
               (make-matrix i hidden)
               (make-matrix hidden output)))))



The first thing we should do is run a pattern through the network and see what comes out the other end. We're not just interested in the output result; we also want to know what happened at the hidden layer, so we return a vector of ao (activation output) and ah (activation hidden).

comp is functional composition. ((comp x y) 5) is the equivalent of (x (y 5)) so in the example below we add the numbers together and then apply the activation function. The nested map calls allow us to iterate over the elements in a matrix.
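A couple of tiny comp examples in isolation (my own, not from the post):

```clojure
;; ((comp x y) a b c) is (x (y a b c)): sum the arguments, then negate
((comp - +) 1 2 3)
;; => -6

;; summing the inputs and then squashing them through tanh,
;; the same shape as the activation step below
((comp (fn [x] (Math/tanh x)) +) 0.25 0.25)
;; same as (Math/tanh 0.5)
```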



(defn apply-activation-function
  "Calculate the activations for a layer given its weights and inputs"
  [w i]
  (apply map (comp activation-function +)
         (map (fn [col p] (map (partial * p) col)) w i)))

(defn run-network
  "Run the network with the given pattern and return the output and the hidden values"
  [pattern network]
  (assert (= (count pattern) (dec (count (network :weight-input)))))
  (let [p (cons 1 pattern) ;; ensure bias term added
        wi (network :weight-input)
        wo (network :weight-output)
        ah (apply-activation-function wi p)
        ao (apply-activation-function wo ah)]
    [ao ah]))


In order to perform backwards-propagation we need a couple of helper functions that work on matrices and vectors to calculate changes in the output that will be used to update the weights.

These helper functions are pretty sick - no-one wants to read (map (partial reduce +) ...). A better design would probably be to introduce a proper matrix abstraction. There's the beginnings of one here, but it's a bit too "Java" syntax-heavy for my liking.
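To make that idiom a little less opaque, here's what the row-summing expression computes on its own (a standalone sketch with made-up numbers):

```clojure
;; (map (partial reduce +) m) sums each row of a matrix
(map (partial reduce +) [[1 2 3] [4 5 6]])
;; => (6 15)

;; combined with an element-wise multiply it becomes
;; a matrix-vector product
(let [m [[1 2] [3 4]]
      v [10 100]]
  (map (partial reduce +) (map (fn [row] (map * row v)) m)))
;; => (210 430)
```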



(defn calculate-hidden-deltas
  "Calculate the error terms for the hidden layer"
  [wo ah od]
  (let [errors (map (partial reduce +) (map (fn [x] (map * x od)) wo))] ;; Sick.
    (map (fn [h e] (* e (activation-function-derivation h))) ah errors)))

(defn update-weights
  [w deltas co ah]
  (let [x (map
           (fn [wcol ccol h]
             (map (fn [wrow crow od]
                    (let [change (* od h)]
                      [(+ wrow (* learning-rate change) (* momentum crow)) change]))
                  wcol ccol deltas))
           w co ah)]
    [(matrix-map x first) (matrix-map x second)]))


I did warn you...

The next thing we need to implement is the back-propagation algorithm itself. This takes in more parameters than it needs to so that it can be tested standalone (it could be implemented as a local function using a closure to capture some of them). It returns an updated version of the neural network.



(defn back-propagate
  "Back propagate the results to adjust the weights"
  [target p results network]
  (assert (= (count target) (count (first (network :weight-output)))))
  (let [pattern (cons 1 p) ;; ensure bias term added
        ao (first results)
        ah (second results)
        error (map - target ao)
        wi (network :weight-input)
        wo (network :weight-output)
        ci (network :change-input)
        co (network :change-output)
        output-deltas (map (fn [o e] (* e (activation-function-derivation o))) ao error)
        hidden-deltas (calculate-hidden-deltas wo ah output-deltas)
        updated-output-weights (update-weights wo output-deltas co ah)
        updated-input-weights (update-weights wi hidden-deltas ci pattern)]
    (struct bp-nn
            (first updated-input-weights)
            (first updated-output-weights)
            (second updated-input-weights)
            (second updated-output-weights))))


All that remains is to train the network. We need a set of samples with known results, together with a number of iterations to try. I've split these into run-patterns, which runs through the patterns once, and train-network, which creates the initial network and runs it through the patterns the specified number of times.



(defn run-patterns
  [network samples expecteds]
  (if (empty? samples)
    network
    (let [expected (first expecteds)
          sample (first samples)
          [ah ao] (run-network sample network)
          updated-network (back-propagate expected sample [ah ao] network)]
      (recur updated-network (rest samples) (rest expecteds)))))

(defn train-network
  ([samples expected iterations]
     (train-network (create-network (count (first samples))
                                    num-hidden
                                    (count (first expected)))
                    samples expected iterations))
  ([network samples expected iterations]
     (if (zero? iterations)
       network
       (recur (run-patterns network samples expected) samples expected (dec iterations)))))


So how well does it work in practice? Pretty damn good. It correctly converges after a few iterations (100 in this case) and consistently gets the XOR test data set correct.


(defn example []
  (let [x (apply train-network (conj xor-test-data 100))]
    (println (first (run-network [0 0] x)) "-->" 0)
    (println (first (run-network [0 1] x)) "-->" 1)
    (println (first (run-network [1 0] x)) "-->" 1)
    (println (first (run-network [1 1] x)) "-->" 0)))

;;; Example run through the REPL
uk.co.fatvat.mlperceptron> (example)
(0.10717792758953508) --> 0
(0.993502708495955) --> 1
(0.9930515903590437) --> 1
(0.00883530181467182) --> 0
nil


Total code weighs in at ~100 or so lines of purely functional code. This made it a doddle to test. Coding this was an exercise in map masochism: never have I had to construct so many convoluted map expressions. At least it works in the end. It feels like it would be considerably simpler to implement this using mutability; I might try that for comparison purposes. Any suggestions on improving the code are much appreciated! Full code is on GitHub.
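As an illustration of the testability claim, here's a self-contained clojure.test sketch for the matrix helpers (the deftest name is mine):

```clojure
(use 'clojure.test)

(defn make-matrix
  "Create a matrix (list of lists)"
  [width height]
  (repeat width (repeat height 0)))

(defn matrix-map
  "Apply a function to every element in a matrix"
  [m func]
  (map (fn [x] (map func x)) m))

(deftest matrix-helpers
  ;; pure functions with simple inputs and outputs are easy to assert on
  (is (= '((0 0 0) (0 0 0)) (make-matrix 2 3)))
  (is (= '((1 1) (1 1)) (matrix-map (make-matrix 2 2) inc))))

(run-tests)
```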

(Update: I've since read some of Programming Clojure and applied some of the lessons learnt - see here.)

Neural networks are pretty interesting and in the next post I'll look at how to implement basic OCR using them.