Thursday, 10 October 2013

Software Architect 2013 Day #2

What's wrong with current software architecture methods and 10 principles for improvement

Tom Gilb showed us a heck of a lot of slides and tried to convince us that we must take architecture seriously. I don't disagree with this, our industry could definitely do with a bit more rigour. Tom was very forthright in his views, and I appreciated his candour.

The system should be scalable, easily customizable and have a great user interface

That's a typical "design constraint" that we're probably all guilty of saying. This is nothing more than architectural poetry (putting it politely) or complete and utter bullshit. In order to take architecture seriously we should measure. Architecture is responsible for the values of the system. We should know these values and be able to measure them. If a given architecture isn't living up to these values, we should replace it with something that does. Architecture exists solely to satisfy the requirements.

Real architecture has multi-dimensional objectives and clear constraints, and estimates the effects of changes. Pseudo-architecture has no dedication to objectives and constraints, no idea of the effects, and no sense of the relationship between the architecture and the requirements.

If we're going to take architecture seriously, then we need to start treating it as engineering. We must understand the relationship between our architecture and the requirements of the system. We must demonstrate that our architecture works.

And then the wheels came off.

I don't work with huge systems, but I can clearly see that understanding the relationship between an architecture and the requirements is a good thing. Unfortunately, Tom presented examples from a domain that was unfamiliar to me (300-million-dollar projects). In the examples, implausibly precise percentages were shown (302%). At that point, I lost the thread. Estimates are just that, and if experience has taught me anything, it's that estimates have HUGE error bars. I didn't really see how all that up-front planning led to a more measurable design. I've got a copy of Tom's book, Competitive Engineering, so hopefully I can fill in the blanks.

Building on SOLID foundations

Nat Pryce and Steve Freeman gave a thought-provoking presentation entitled "Building on SOLID foundations" which explored the gap between low-level detail and high-level abstractions.

At the lowest level we have guidelines for clean code, such as the SOLID principles. At this level, it's all about the objects, not about how they collaborate and are assembled into a functioning system. Even with the SOLID principles applied, macro-level problems occur, all colourfully described with food metaphors. In "ravioli code", the individual blocks are well organized, but the whole still looks like a mess. "Filo code" has so many layers you can't tell what's going on. "Spaghetti and meatballs" code is an application with a good core, but the communication glue surrounding it is a huge mess.

At the highest level we have principles such as Conway's Law, Postel's Robustness Principle, CAP, end-to-end principle and REST.

But what's in the middle?

In the middle there are some patterns, such as Cockburn's Hexagonal Architecture, that help us structure a system as an inner domain language surrounded by adapters that convert that data to the needs of each client. The question remains, though: what are the principles between low- and high-level design?
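The ports-and-adapters shape can be sketched in a few lines of Java. This is a minimal illustration, not code from the talk; all the names (TemperatureSource, FrostWarning, FakeSensorAdapter) are invented for the example.

```java
// Minimal ports-and-adapters sketch. All names are invented for illustration.

// The inner domain depends only on this "port" interface, not on any technology.
interface TemperatureSource {
    double currentCelsius();
}

// Domain logic, expressed purely in domain terms.
class FrostWarning {
    private final TemperatureSource source;

    FrostWarning(TemperatureSource source) {
        this.source = source;
    }

    boolean isFrostLikely() {
        return source.currentCelsius() <= 0.0;
    }
}

// An "adapter" on the outside converts a specific technology to the port.
// A real one might wrap an HTTP client or a hardware driver; this fake
// just returns a fixed reading.
class FakeSensorAdapter implements TemperatureSource {
    private final double reading;

    FakeSensorAdapter(double reading) {
        this.reading = reading;
    }

    public double currentCelsius() {
        return reading;
    }
}

public class HexagonalSketch {
    public static void main(String[] args) {
        FrostWarning warning = new FrostWarning(new FakeSensorAdapter(-3.5));
        System.out.println(warning.isFrostLikely()); // prints "true"
    }
}
```

The point of the shape is that the domain code in the middle never mentions the adapter, so the client-specific details can change without touching it.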

Nat and Steve assert that compositionality is the principle for the middle. We should adopt a functional-style approach and build a series of functions that operate on immutable data within a stateful context. That sounds complicated, so what does code written in this style look like? Hamcrest gives us some examples: by using simple combinators (functions that combine other functions) you can build up complicated expressions from simple operations (see the examples).
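The combinator shape behind Hamcrest-style matchers can be sketched without the Hamcrest library itself, using java.util.function.Predicate from Java 8. This is my own illustration of the idea, not an example from the talk.

```java
import java.util.function.Predicate;

// The combinator idea behind Hamcrest-style matchers, sketched with
// java.util.function.Predicate rather than the Hamcrest API itself.
public class CombinatorSketch {
    public static void main(String[] args) {
        // Simple building blocks...
        Predicate<Integer> positive = n -> n > 0;
        Predicate<Integer> even = n -> n % 2 == 0;
        Predicate<Integer> small = n -> n < 100;

        // ...combined into a more complicated expression. Nothing is
        // mutated: each call to and() returns a brand new predicate.
        Predicate<Integer> valid = positive.and(even).and(small);

        System.out.println(valid.test(42)); // prints "true"
        System.out.println(valid.test(-2)); // prints "false"
    }
}
```

In Hamcrest the equivalent combinators are matchers like allOf and anyOf, which glue simple matchers into bigger ones in exactly the same way.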

Having done a fair bit of Haskell, I found it really easy to agree with this point of view. When there's no mutable state you can reason about code locally (without checking for mutation elsewhere). Local reasoning means I can understand the code without jumping around, and that's a hugely important part of a well-designed system.

I was slightly concerned to hear this style of programming described as Modern Java. I hope it isn't, because using Java like this feels like putting lipstick on a pig. One of the things I value in Haskell is that composition is a first-class citizen: partial application, function composition and first-class functions mean that gluing simple code together to make something powerful is incredibly easy. I hope we're just at that awkward point in language evolution where we're stretching our current languages to do things they don't want to do. Maybe this is finally the time when a functional language hits the mainstream? (Maybe it's Clojure or Scala.)

We tried adopting this style of programming at Dynamic Aspects when building domain/j [PDF]. It was fantastic fun, and I really loved Java's static imports for making the code lovely and terse (finding out that $ is a legal identifier also helped). Something about it felt dirty, though; I haven't quite put my finger on what. Hopefully, with lambdas in Java 8, this style will feel more natural.

So what is the bit in the middle? The bit in the middle is the language that describes your domain. Naming is everything, and you should do whatever you can to make it as easy as possible to understand. Eschewing mutable state and using functional programming to compose simple operations seems to work!

Agile Architecture - Part 2

Allen Holub gave a presentation on agile techniques for design. Allen examined the fragile base class problem in some depth, before recapping CRC cards (which aren't used enough!). Allen is a good presenter, so it was great to have a recap and a few more examples to stick in my brain!


Leading Technical Change

Nate closed out the day with a presentation on Leading Technical Change. It was well presented and focused on two questions: how do you keep up with technology, and how do you engage your organization to move to different technologies?

Nate presented some really disturbing statistics about how much time Americans (and presumably people in other countries) waste on TV. Apparently the average American watches 151 hours of TV a month! Wow.

Nate introduced the audience to the idea of the technology radar, which lets you keep track of technology that is hot for yourself or your organization. We're trying to build one at Red Gate. We've also experimented with skills maps, and you can see an example from a software-engineering point of view here (I'd love to know what you think).

Introducing change is hard, and Nate presented the same sort of ideas that Roy presented the previous day: change makes things worse in the beginning, but better in the end. Having the courage to stick out the dip is hard!

I have to admit, I didn't take many notes from this talk because I was too busy enjoying it :) It was well presented and engaged the audience. In summary, change is hard and it's all about the people. I think deep down I always knew that (people are way more complicated than code), but it was great to hear it presented so well!