Friday 11 April 2014

How does it feel to give a terrible conference talk?

Have you been to a conference and sat through an awful presentation and wondered just how the hell someone got there? Me too!

Recently I attended the ACCU conference in Bristol and got to experience what it feels like to deliver something that went down like a lead balloon. One evening many moons ago, I thought I'd send in a proposal. By some small miracle I got accepted and was all set to run a 90-minute introduction to Haskell.

I'd already run through the workshop once at a local user group. The material isn't amazing, but I was confident in delivering it and thought it offered people a chance to get a taste of Haskell and programming with functions.

Then the problems started. It's ACCU. It's full of clever people; therefore I should level up the material and assume more knowledge. Right? I should make it more hands-on, more interactive and better in every way.

I prepared hard. I updated the slides. I added more and more. I wrote notes, I dug up references and I was confident it would kick ass.

And then the day arrived.

90 minutes seems like a long time. It isn't. I spent a good 15 minutes ensuring that everyone could run "hello world". Very rapidly, 90 minutes became 60 minutes.

Then my cleverness got the better of me. The Curry-Howard isomorphism is fascinating, but perhaps it's not the best subject matter within the first 30 minutes of any presentation. Trying to explain it under pressure with questions from an audience eager to learn makes it even worse. I probably lost another 20 minutes trying and failing to explain that const :: a -> b -> a only has one valid implementation in Haskell. And what the hell are the poor attendees going to do with this information? GAH!
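
For the record, the point itself is sound, even if it was the wrong moment for it. Here's a rough sketch of what I was trying to get across (not the actual slide code; the primed name is just to avoid clashing with the Prelude): the type tells you the function knows nothing about its arguments, so parametricity leaves it only one total implementation.

    -- A rough sketch of the const point, not the actual workshop material.
    -- The type says: given an 'a' and a 'b', return an 'a'. The function
    -- knows nothing about either type, so the only total implementation
    -- is to hand the first argument straight back.
    const' :: a -> b -> a
    const' x _ = x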

And so it continued. On to writing some code. I'd wanted to make it easier to compose higher order functions to produce results, so I'd made the initial data structures in the exercises a bit more complicated than those I'd shown in the example slides. Big mistake. This made it much harder for people to grok the syntax; I'd shown simple syntax but not given enough direction. 30 minutes rapidly disappeared and I'm now *way* behind schedule.
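
For a flavour of what I was aiming for, here's a made-up example in the same spirit (Person and adultNames are hypothetical, not the real exercise): small higher-order functions composed into a pipeline over a simple structure. The exercises used more complicated structures than this, and that's exactly where people got stuck.

    -- A made-up illustration of the style the exercises were aiming for,
    -- not the real exercise code: small functions composed into a pipeline.
    data Person = Person { name :: String, age :: Int }

    adultNames :: [Person] -> [String]
    adultNames = map name . filter ((>= 18) . age)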

At this point, I'd already realized the situation was going Pete Tong. But what do you do? You can't just down tools and walk out of the room (well, I suppose you could, but that'd be worse), so you just have to knuckle down and carry on. And carry on I did, through more examples (well over-egged) and then onto the Universality of Fold (brain, what the hell are you thinking?!).
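
For anyone wondering what I was even on about: the universal property of fold says that foldr captures a whole pattern of recursion, so lots of list functions are just foldr with different arguments. A minimal sketch (standard textbook examples, nothing from my slides):

    -- The universal property, roughly: g = foldr f v exactly when
    -- g [] = v and g (x:xs) = f x (g xs). In practice it means many
    -- list functions are foldr in disguise.
    sumList :: [Int] -> Int
    sumList = foldr (+) 0

    lengthList :: [a] -> Int
    lengthList = foldr (\_ n -> n + 1) 0

    mapAsFold :: (a -> b) -> [a] -> [b]
    mapAsFold f = foldr (\x acc -> f x : acc) []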

With 5 minutes left, there's plenty of time to throw a demo of QuickCheck in, right? But then I realized I'm in an Emacs buffer. How do I increase the font size so people can read it? GNARGH!! It's over to Notepad to bump the fonts up in that. "Should have used vi!" went the audience. ARGH!
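
For the curious, a minimal QuickCheck example looks something like this (the classic reverse property, not necessarily what was in my buffer): you state a property, and QuickCheck generates random inputs and checks that it holds for all of them.

    import Test.QuickCheck

    -- The textbook property: reversing a list twice gives the original
    -- list back. QuickCheck generates random lists and checks it holds.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice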

And then the buzzer sounds (well, not really, but it's time to go). Bring things to a halt and escape to a corner of the building. I can't imagine that was particularly fun for the participants. A few people kept up (hurrah!) and there were a couple of positive things said, but I knew it'd gone wrong, and boy, that doesn't feel good.

So, at least now I know how it feels (bad, very bad) and I also learnt an important lesson. Keep the message simple! Focus on the single takeaway you want participants to have. I wanted people to leave knowing that Haskell isn't impenetrable, and to see how far you can get just by reading type signatures. However, I lost this in the noise of other random related things and tried (and failed) to communicate a million and one other features.
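
To make that takeaway slightly more concrete, here's a tiny illustration (primed names just to avoid clashing with the Prelude): for parametric functions like these, the signature alone pins down most of what the function can possibly do.

    -- The types alone tell you most of the story.
    id' :: a -> a              -- can only hand its argument straight back
    id' x = x

    fst' :: (a, b) -> a        -- can only return the first component
    fst' (x, _) = x

    swap' :: (a, b) -> (b, a)  -- can only exchange the components
    swap' (x, y) = (y, x)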

KISS!

Wednesday 9 April 2014

Agile - What Next?

I'm at ACCU at the moment, and instead of preparing my talk on Haskell for Thursday, I'm writing up my notes from Bob Martin's talk on agile yesterday.

Agile was originally founded by a bunch of programmers over a decade ago. The aim (from Kent Beck) was to devise a system that eliminated the trust divide between programmers and managers (them and us). Transparency was the aim of the game. Programmers would record velocity using story points. Managers would track number of story points per sprint and produce burn-down charts. Everyone is happy.

Unfortunately, burn-down and velocity charts track only one part of software development: features. There's a hidden part of software development that isn't captured by these charts: the ability to change. If there's one thing for certain in software development, it's that people will change their minds and features will need to adapt. It's no good your software having the correct features today if it can't have the correct features tomorrow. Arguably, a code base's ability to respond to change is the primary responsibility of the developers.

In the original lightweight process, XP, this was kept in check by Ron Jeffries' concentric circles.

Concentric Circles

This, again, is part of transparency and trust. At the inner-level, TDD, pair-programming and simple design keep the software honest. A suite of tests gives transparency on the system functionality. Moving further out we reinforce these practices with collective ownership (transparency again, no siloed development). And so on, and so forth.

Fast-forward a decade or so, and where are we now? Agile is the domain of the manager. There are no developers at agile conferences any more; it's all about the secondary value of a software product (shipping features) rather than the primary value (the ability to react to change).

The XP Practices have been forgotten. Scrum empowers teams to take ownership of their practices and opt out of ones that don't work. Of course, it's easier (in the short run!) to forget about TDD, simple design and refactoring. However, in the long run productivity grinds to a halt (see Design Stamina Hypothesis).

Bob argues (The Corruption of Agile) that agile doesn't exist without the practices that support it. I agree; most agile teams aren't agile in their ability to react to change. Martin Fowler has a term for it, "Flaccid Scrum": we adopt the project management side but not the underlying practices that ensure the code base becomes malleable and responsive to change.

With all this in mind, the trust issues have re-emerged. Letting the velocity (the number of story points per sprint) drop looks bad, so developers have rebelled: let's just make the stories smaller. The points counted stay the same, but the stories themselves shrink. Teams are wading through custard, developing features just as slowly as ever.

The thrust against this has come in the form of "software craftsmanship". This tries to reimagine the circles from the inside out, but it's failed. It's failed because it doesn't attempt to bridge the divide between the managers and the coders. It might help the engineers to "do the right thing" more often, but it doesn't provide transparency.

And the talk ended there: no answers for the future, and a little depressing. I've definitely seen the scenarios Bob describes, but what's the solution? It's probably not "kill all the project managers", as someone suggested. I'd love to make the "ability to change" a tangible concept that teams can explore and understand. It's not an easily measured property, but I think taking data-driven decisions about code is part of the answer. Project managers need options to meet business constraints. Sometimes it's OK to go quick and dirty, to spike a feature that may not live longer than a week, but you have to accept that recovering from that burst of activity carries a remedial cost, and understand what that cost is.

Right, now to finish off a few slides for this Haskell thing.