Saturday, 28 September 2013

Agile Cambridge 2013 Day #3

Lean Coffee

The day started particularly early today with Lean Coffee. It's a great way to look at a wide range of topics in a short time period.

I got a lot out of the discussion, particularly around technical debt. I need to try the Get Kanban game as that's engineered to show that quality does matter. Someone also mentioned a low-tech way of capturing technical debt; just place a dot on the board when you're affected by it. At least that demonstrates improvements!

Real Options in the Real World

Chris is from a financial background and presented his approach to IT risk management. The briefest of summaries is:

  • Options have value (even if that value is indeterminate).
  • Options expire.
  • Never commit early unless you know why.

The "never commit early unless you know why" point echoed Neil Denny on day 1, where he spoke about the delicious discomfort of uncertainty. As humans, we find it easier to close options out, but rationally we should keep them open.

Humans are bad at risk management. We have a tendency to want to assign probabilities to failures so that we can pretend that it's not very likely to happen. We should instead focus on time to business as usual; what are the options for recovery?

Chris then looked at how options thinking applies to moving staff between projects. Based on the theory of constraints, there's only one bottleneck in the system and (logically) we should move people to solve that problem regardless of their role (blurring the lines between dev and test, for example). We (as an industry) worry about this because of people's attitudes ("I'm not going to do testing!") or people's capabilities ("but Joe's the only COBOL guy!").
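The bottleneck argument can be sketched with a toy model (the stage names and capacities below are invented for illustration, not from the talk):

```python
# Toy model: throughput of a dev -> test pipeline is capped by its slowest
# stage, so spare capacity only helps if it's applied at the bottleneck.
def throughput(stages: dict) -> int:
    """Tasks per week the whole pipeline can deliver."""
    return min(stages.values())

team = {"dev": 6, "test": 3}   # test is the bottleneck
assert throughput(team) == 3

team["dev"] -= 1               # move one person's capacity into testing...
team["test"] += 1
assert throughput(team) == 4   # ...and the whole system delivers more
```

Adding yet more developers to `dev` would have changed nothing; only the bottleneck matters.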

Chris made a persuasive argument: we should (counter-intuitively) assign the least experienced people to critical tasks first, so that the experienced people can take more of a mentoring/coaching role. The best developers coach; the others fix problems with their help. This helps eliminate the "project heroes". Staff liquidity, he argued, is what truly delivers agile.

This was a very compelling case, but something doesn't feel right to me. It ignores the human factors. Software engineering is not manufacturing; it's a craft. People also don't have an infinite capacity to learn (or at least I don't!). If I'm switched between projects, dancing between coaching, developing and testing, then I feel my overall effectiveness in each of these areas will be reduced. I mentioned this on Twitter and was given some pointers to further reading.

I'll try to find the time to read that work, along with the Commitment book.

The Art of Systematic Feedback

Marcin Floryan gave a series of examples of how feedback is useful. I've long been sold on this point, and it was great to see it backed up so well. I've summarized Marcin's formula for systematic feedback below:

  1. Explicit Assumptions (scientific reasoning!)
  2. Clear Objective
  3. Careful design
  4. Learn from Results
  5. Rinse and Repeat

You know you're doing feedback right when you are "acting responsibly to meaningful data". I've argued before that we need to do this for coding, encoding our explicit assumptions through deliberate development, and this talk helped me see the pieces I was missing.

The Auto Trader Experience Report

All too often in software development we focus our attention on the 1% of software projects that go well, and we never really look at the problems. The Auto Trader experience report was fascinating and brutally honest. They showed how, despite being agile, projects can still fail.

Given an impossible deadline, the team panicked. They tried incredibly hard to estimate the project to prove that it was mission impossible, but this didn't help. Instead the team was given unlimited resources (large numbers of contractors) and just told to "do it". It was great to get an insight into the panic, and this is definitely one to watch when the video goes live on InfoQ!


Gamification - How I became a spaceship commander

Tomasz presented an entertaining study in gamification. The goal was to influence the behaviour of software developers to use their bug-tracking system and track tasks. It was interesting to see the behaviours that this encouraged in developers. Definitely an area to spend some more time on.

Wrapping up

So what do I take from Agile Cambridge 2013? An awful lot of reading material, a huge collection of ideas floating around, and some practical tips for Monday. Job done.

Agile Cambridge 2013 Day #2

Conference Cold. Every conference I attend seems to result in me getting ill. If you see me turning up next time in a mask, then you'll know why.

Conference Protection

Anyway, back to the writeup.

Change or be changed

Change. The ever-present moment of opportunity or terror that's been a staple of every company I've ever worked at. Janet Gregory explored the various types of change that occur in life.

Do you need or want to change? Sometimes you want to change (wouldn't it be great if I was fit?), but sometimes you have no choice (your health is suffering; you have to get fit). Change can often bring new opportunities (Ford made cars; others wanted faster horses).

Towards the end, Janet highlighted some of the change models. I caught most of them, and I'll push them on my unbounded stack of reading material.

Making Sense of Systems Development

Cynefin. Kee - ne - fen. This is a word I've heard much about. And now I can pronounce it. It's a non-sense making framework (that was a cheap shot, sorry).

The workshop was well-run, and we classified situations as either:

  • Simple (the answer is clear, no need for analysis)
  • Complicated (solvable by an expert or process)
  • Complex (you might know what to do better next time, with hindsight)
  • Chaos (totally new, no idea how to do things)

There's something about this classification that feels familiar. Learning a new subject traverses from chaos (I've no idea what I'm doing!) through complex, then complicated, and finally simple (unconscious competence), and I can see how it would be a useful tool in the armoury. I didn't see anything that fundamentally changed my opinion (maybe something will click later?).

I'm very sceptical of things you have to pay for to understand (see ScrumMaster courses, Agile certification, Scientology and, from the looks of it, Cynefin). I coin Jeff's law:

Blurring the Lines

Chris George presented on blurring the lines. The central premise is that by putting walls between dev and test, we've suffered. By breaking down these walls, we can produce better software.

Chris referenced one of my favourite papers, the 1968 NATO Report on Software Engineering, and started by introducing a quote by Alan Perlis along the lines of "testing is a process best undertaken throughout the product life cycle".

Dedicated testing departments split this process, introducing artificial communication barriers which (due to Conway's Law) resulted in a split between development and test. Chris looked at breaking down this wall and merging the roles of test and development and the positive effects it had on his team.

This was a theme revisited by Chris Matts in the next day's keynote. There's something I don't quite agree with in both talks, namely the assumption that people are fungible resources. More to come on this, once I get my thoughts together.

The Open Session

Due to the conference flu, a number of speakers dropped out at the last minute. Huge kudos to the organizers for managing to fill the gaps with interesting and relevant content. The final session of the day was an open session organized by Simon and Ryan.

Our group discussed controversial opinions.

I'm not sure we tried hard enough to rock the boat!

Wednesday, 25 September 2013

Agile Cambridge 2013 Day #1

I'm at Agile Cambridge 2013 this week. I participated in the review panel this year and there was a huge number of quality submissions. The programme looks fab and I'm looking forward to the rest of the week. Here are my notes on the sessions I attended on the first day.

Moving Towards Symbiotic Design

The opening keynote by Michael Feathers explored Conway's Law which states:

"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."

He gave some examples of this in action, and I found it easy to relate these to experiences of my own. When working with remote teams we've struggled with communication issues, and so has the code. The boundaries between "our" module and "their" module were as cloudy and woolly as the communication between the teams.

I've also seen the Tragedy of the Commons that Michael described. I've worked at some places where shared code had no clear owner and thus degraded over time. On the flip side, I've seen shared code be tremendously successful when a team was devoted to it full-time (and that team was highly communicative). Conway's law holds!

Michael's observation that people working on a code base for too long become immune to the cruft that's accruing hit home. Uncle Bob has this great metaphor of developers understanding the geography of their code when working with large classes (this function is just here, after all this whitespace, and the really long method beginning foo...). I think this kind of complexity is what developers who work on a project for too long get comfortable with. When you're a new person on that code base you don't have that historical knowledge and you struggle to pick it up.

So what would change if code were king and people were subservient to it? It's an interesting question. People working on the same code for a long time become immune to the cruft that's accumulated within it; the code is screaming out for someone else to tend to it. With this in mind we should probably be more deliberate about moving people around between software projects.

If code were a first-class citizen in project discussions then we'd pay more attention to things like "death by 1000 features". There's a point at which adding features to a project becomes considerably more difficult. Perhaps we should push back more and sometimes do what the code wants, rather than just caving in and sacrificing the code base?

The takeaway for me is that we don't do a good job of talking about code as a party in the relationship between people and process. We're already happy to change project structure to deliver the product, but let's start to explore changing project structure to drive the code in positive directions.

Revere the code!

Unpicking a Haystack

I thoroughly enjoyed the Unpicking a Haystack session by Duncan McGregor and Robert Chatley. We took perhaps the worst example imaginable of legacy code (decompiled from binaries!) and tried to make sense of it.

Different approaches were tried. One group did a lot of reading (they only had a text editor, not an IDE). Another tried to get a foothold in the code, get it under test and test their way to success. The group I was in took a slightly different approach and leaned heavily on the IDE. I've touched on this idea before (You don't need tests to refactor).

Our approach was to crowbar out large lumps of code with repeated "Extract Methods", rename things to make sense and use the IDE inspections to fix all the silliness (e.g. redundant variables, constant parameters, unused methods). It seemed to work pretty well, but I think there was some scepticism among the others as to whether this was a good idea or not.
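As a rough sketch of what that looked like (in Python rather than the decompiled code we actually had, with invented names), a couple of Extract Method steps plus renames turn an opaque lump into something that states its intent while preserving behaviour:

```python
# Before: the kind of opaque lump decompilation produces.
def method1(arg0):
    v0 = 0
    for v1 in arg0:
        if v1 > 0:
            v0 += v1
    return v0 * 2

# After repeated "Extract Method" and rename steps: same behaviour,
# visible intent.
def sum_of_positives(values):
    return sum(v for v in values if v > 0)

def doubled_positive_total(values):
    return 2 * sum_of_positives(values)

# The refactoring is behaviour-preserving:
assert method1([1, -2, 3]) == doubled_positive_total([1, -2, 3]) == 8
```

The IDE guarantees each individual step preserves behaviour, which is exactly why we felt we could do this without a test suite.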

I hope in the future development tools will evolve to become something more developers trust and code becomes malleable putty we can shape in seconds. As a thought experiment, imagine if you could put your IDE in "semantic lock" mode and just drag, drop and reshape code to your heart's content with the guarantee that behaviour would be preserved. How would that change your ways of viewing legacy code? How would it change your design process?

To wrap up, the group discussed how they might change the way they write code in light of the problems of refactoring. I didn't say anything at the time, but I've since thought some more about it. Decompilation loses names (not all names, but some), but it always preserves types. I've waffled before about Types and Tests.

With this in mind, we should be more aggressive about encoding information within types (e.g. compiler-verifiable properties) rather than names. As an example, we found some code with a value that either referenced working directories (pwd) or a password. The type information told us it was a string. The variable name was str12345. We had no idea which one to choose. If the original authors had encoded that information in a type, then we wouldn't have had a problem; the type would have described the shape of the variable.
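A minimal sketch of the idea (the names here are hypothetical, not from the code we saw): wrap the primitive string in distinct types so that signatures, rather than variable names, carry the meaning:

```python
class WorkingDirectory:
    """A filesystem path used as a working directory."""
    def __init__(self, path: str):
        self.path = path

class Password:
    """A secret; its repr is masked so it can't leak into logs."""
    def __init__(self, secret: str):
        self._secret = secret
    def __repr__(self):
        return "Password(****)"

def connect(cwd: WorkingDirectory, password: Password) -> str:
    # The signature documents which string is which; class names survive
    # decompilation even when local variable names like str12345 do not.
    return f"connecting from {cwd.path}"

print(connect(WorkingDirectory("/home/jeff"), Password("str12345")))
```

In a statically typed language the compiler rejects a swap of the two arguments outright; in Python a type checker such as mypy catches it.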

Rob and Duncan are organizing a Software Archaeology Conference which looks fabulously interesting.

Rituals of Software Engineering

Alex Shaw examined the topic of rituals in software engineering. Rituals have a long history, and the essential parts of a ritual are:

  • Obligatory (not in the sense of being forced, but you feel like you should attend).
  • Admit of delivery (the style of delivery doesn't matter).
  • Consequence-free (measurable outcomes are not the goal).

I guess "rituals" has a slightly negative connotation for me. When I hear rituals I think of animals being slaughtered and cargo-cultism (obviously not a healthy way to do things). From the Wikipedia definition:

A ritual "is a stereotyped sequence of activities involving gestures, words, and objects, performed in a sequestered place, and designed to influence preternatural entities or forces on behalf of the actors' goals and interests"

By the end of the presentation I was more convinced that I was disagreeing with a word and not the content. The essences of great team 'rituals' included that members of the team know the context and understand the value and reason for the ritual. With all that in mind, I agree rituals are important for engineering teams and it's a really interesting idea to explore.

Growing XP Teams

Rachel Davies gave us a great insight into how the team works at Unruly Media, a company run from the start on XP principles.

I love these kind of experience reports. Speaking afterwards, The Agile Pirate described it as voyeurism. I like that description (slightly sordid though it sounds). It's great to get an insight into how a team works and interacts.

It was interesting to hear how the "Lone Ranger" role worked at Unruly. The idea of having a single developer being available to answer questions from support/sales/product managers is interesting, and I can see how that could help protect the other members of the team from unwanted interruptions.

We don't know where we're going. Let's go!

Last, but by no means least, was a talk with an intriguing title by Neil Denny. This was my favourite talk from the first day, and had nothing to do with software whatsoever!

Neil's talk explored uncertainty, that "delicious discomfort of not knowing". Uncertainty is something we face all the time in software development (because no-one knows how to do it right), so it was great to explore this topic. The presentation style was fantastic with audience participation, touches of humour and an engaged audience.

What's more dangerous than uncertainty? "We are never more wrong than when we are most right". Once you convince yourself that you definitely have the answer, you become closed to better solutions, becoming dogmatic and rejecting alternatives. This is a dangerous place!

The point I left with is that we should treat uncertainty as a challenge to savour, not something to fear. When people are looking for answers, they tend to want the smallest possible change (confirmation bias?), rejecting the truth of what we actually need to do.

There were a few books mentioned which I need to add to my reading list.

And then...

So there I am, it's the end of the day and I'm outside the college waiting for a taxi. I get talking to someone else waiting and discover that she's a retired midwife. Nothing too strange yet. A bit more talking and I discover that she's from the same area I was born in. A few minutes later, we talk about times and ages, and realize that she would have worked at the same hospital I was born in. Putting more dates together, we discover that I've just bumped into a person who possibly delivered me. Freaky!

Looking forward to tomorrow!

Oops, I should have mentioned we're hiring.

Friday, 20 September 2013

How do you write software?

It's a really simple question, but one that's hard to answer. If you start by saying "I take a story from the board" then let me stop you right there. I'm not interested in the process; I'm interested in what happens after you've chosen what to do and before it's done.

Maybe your answer involves the tools you use? You might begin with "I write in Emacs; let me tell you about my setup...". Nope. Not interested in that. Glad it works for you, but it doesn't really tell me how you construct software.

Maybe your answer involves the syntactic rules you use? You write software with spaces, not tabs and all your statements are semi-colon terminated and it’s either K&R or the highway? Yawn. That’s not what I'm after.

How do you actually build your software; what happens between the brain and the keyboard?

Half-baked practices

After pushing a bit more, I ask again and I sometimes get responses like this:

  • All the significant code I write is peer-reviewed.
  • I try to write unit-tests when I can.
  • Most of my bug fixes contain a regression test.

What do they have in common? They are all weak statements that don't really tell me (and most importantly YOU) anything about the way you develop software. They probably tell me that your heart is in the right place, but they don't show any commitment.

Every single one of these statements has an escape hatch. It's far too easy to ignore these values. You can almost imagine the excuses now: "This is just a small commit; it doesn't need a review, let alone a unit test." "This bug I just fixed is so tiny, so inconsequential that it doesn't need a regression test."

I'm not going to try to argue that you should never use an escape hatch, but to word things so generally is simply inviting temptation.

Trying to strengthen those practices

So what happens if we strengthen our statements a bit more? Let's take the first half-baked principle and try to improve it a little.

  • All code will be peer-reviewed

Peer reviewing all code. That's undoubtedly a good thing, and it certainly doesn't have an escape hatch. Or does it? The problem with this statement is that it's far too vague. What is the code being reviewed for, and to what criteria? Perhaps it's obvious to you, but what about the others on the team? Often you'll ask around and get different answers from members of the team.

Is it a formatting check? I hope not, because you’ll have automated that and stopped arguing about it a long time ago. Right?

Is it checking for obvious mistakes? Maybe. That’s definitely a good start, but it’s still a bit woolly. What's obvious to you probably isn't so obvious to someone else on the team.

I'd suggest a strong set of criteria creates a tighter statement. For example: "all code will be peer-reviewed, and the reviewer checks that new behaviour is covered by a test, that names reveal intent, and that no new inspection warnings are introduced."

It doesn't really matter to me (though it should matter to you) what the criteria are; the most important thing is that they are strong enough for a disciplined code review.

So what good are fully fledged practices?

OK, so you've agreed on a couple of practices. What’s that actually going to do for you? Once you've got a disciplined way of developing software you can start to reflect on how things are working for you.

You've got a set of fully fledged practices. These practices are hopefully challenging you to think much more carefully about the way you construct software. Perhaps you've committed to getting all code peer-reviewed? Chances are you've got some doubters on your team. How can you demonstrate these practices are working?

Look at your practices; how do they translate into outcomes? Perhaps you want a code review before every push because you are trying to unpick a mess of legacy code? Maybe you’re trying to roll out test-driven development because your integration tests take hours to run? For any outcome, you can probably think of something you can compare that will help you know whether it's working or not. Perhaps you could measure whether your code coverage increases? Maybe you could take a look at the cyclomatic complexity of the code? How many times did the build break this week? You don't even need automated measures, you could simply ask opinions.
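As one illustration of such a measure, here's a rough sketch (my own, not from any tool mentioned above) of a crude cyclomatic-complexity proxy that counts decision points in Python source:

```python
import ast

def decision_points(source: str) -> int:
    """Crude cyclomatic-complexity proxy: 1 + the number of branch nodes."""
    branchy = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(n, branchy) for n in ast.walk(ast.parse(source)))

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    for _ in range(n):
        pass
    return "ok"
"""
print(decision_points(snippet))  # 1 + one `if` + one `for` = 3
```

Tracked week on week, a rising number is a conversation starter, not a verdict.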

Put together, practices and retrospectives give you a base to reflect on the way you develop software and find ways to improve it. Donald Knuth did this: for the entire history of the TeX program, Knuth kept a bug journal recording the hows and whys of everything that went wrong. I've never seen a quote suggesting so, but I'd imagine it's hard not to become a better developer by reflecting on what works and what doesn't.

Putting it all together

Every team dislikes something about their software. Maybe it's harder to change than it was a year ago? Maybe the compilation time takes forever because of all the coupling? Maybe someone else wrote it and it just plain sucks? Flip the problems around and you've got goals to achieve. You want the ability to change code quickly and easily. You want a 10 minute build. You want simple and maintainable code. Now you have measurable outcomes!

This is where deliberate practices come into play. Once you've chosen some practices you can start to see how much difference they make to day-to-day development. I'd suggest that most practices probably need a good few months before they've truly bedded in. It's worth reflecting on them more often, though; perhaps you'll see benefits sooner?

There is no such thing as a one-size-fits-all approach to software development. The idea of best practice is a complete myth; it's about what works for your project, your team (and its human factors) and your work environment. By being deliberate about the practices you use, you can attempt to find what works best for you.

This is obviously a grossly simplified approach. The real world is messier and there are constraints all over the place, but I still think that being deliberate about the way we create software is an important step in the continuing journey to become a better software developer.

So, how do you write software?

Thursday, 5 September 2013

Programming Epiphanies

What is a programming epiphany? It's that moment when you realize that the way you've coded is wrong wrong wrong, and there's a better way to do things.  Here are a few of my programming turning points.
When I was studying Computer Science at the University of Southampton object-oriented programming was simply:
  • Inheritance
  • Encapsulation
  • Polymorphism
Inheritance was first, and that meant it was the highest priority for me.  If I could find a way of inheriting A from B, I probably would.  Encapsulation?  That's just wrapping all those lovely members with getters and setters.  Polymorphism wasn't something I really ever thought about.  It came last so it was probably something I could get away without.

Around this time, I was programming using C++ and the Microsoft Foundation Classes (MFC).  My understanding of MFC at the time gelled quite well with how I understood object-oriented code.  Plenty of inheritance!  I even felt I saw a use for polymorphism, overriding some of those virtual function things.

Things got a bit better towards the end of my degree.  I found a copy of Effective C++ and read about const correctness.  I remember having particularly knotty issues in some of my code (probably due to my understanding of encapsulation) and not being able to find the bug.  By liberally sprinkling const over the code base (it's like a virus!) I eventually found my unwanted mutation.  My first epiphany; design code correctly and make the bugs impossible.

I bumbled my way through a research degree, and then became a research scientist for a bit.  I never really wrote code that anyone else had to read, so my code was just good enough.  My next big leap in learning came with my next job. For posterity here's the original advert (no idea how I got in!).

Day 1.  Someone mentions this thing called the visitor pattern, then ubiquitous language and then a few more things.  WTF?  Visitor pattern?  Names matter?  Oh dear, there's a huge amount I don't know.  I managed to get through the day without getting found out, visited Amazon and ordered a few dozen books.  My second epiphany: smarter people than me have likely solved your problems before; get reading.  I went through my pattern craze, no doubt needlessly applying them sometimes, but I worked it out of my system.

At some point came another one: singletons are bad.  I think everyone realizes this at some point and instantly recoils from all design patterns.  I love functional programming, and I remember finding a Design Patterns in Functional Languages presentation and thinking to myself that maybe patterns are just missing language features?

All good things must come to an end, and next I turned to the dark side of enterprise programming.  If you don't know what this is, it's very simple: a salesperson promises the impossible to a clueless manager, and then a team of software engineers works at solving the impossible problem to an equally impossible deadline.  I identified a huge amount with Death March, but lacked the gumption to quit.  Not all was bad though; by seeing every possible variant of wrong I learnt something incredibly important: you can't tolerate complexity.

Quality isn't just something you can get back another day; quality matters.  Once you've lost quality, once you've lost clean code, you're fucked.  You might get away with it for a bit (the human brain can deal with remarkable amounts of complexity), but in the end that ball of mud will crush you.

So how do you build in quality from the ground up?  As part of my job, I visited an extremely Californian company that practiced XP.  All code pair-programmed, all code with a failing test first.  This made a big (though not immediate!) impression on me.  It wasn't until I read Growing Object Oriented Software Guided By Tests that it clicked in a way that felt right and TDD seemed more natural.

So what are your programming epiphanies? What are the moments in your development so far that changed the way you think about writing code?