Tuesday, 14 August 2012

NATO on Software Engineering

The NATO Science Committee report on Software Engineering (available here) is a fantastic read.

A study group was formed and given the task of assessing the entire field of computer science. The name "Software Engineering" was deliberately chosen as being provocative, in implying the need for software manufacture to be based on the types of theoretical foundations and practical disciplines that are traditional in the established branches of engineering.

It was hoped in 1968 that the contents of the paper would be widely distributed, so that the necessities, shortcomings and trends it found could serve as a signpost for manufacturers of computers as well as their users.

Back in 1968, there were only about 10,000 installed computers, with that number increasing at between 25% and 50% a year.  The rate of growth was viewed with alarm, with quotes such as:

Particularly alarming is the seemingly unavoidable fallibility of large software, since a malfunction in an advanced hardware-software system can be a matter of life and death.
We undoubtedly produce software by backward techniques 
Programming management will continue to deserve its current poor reputation for cost and schedule effectiveness until such time as a more complete understanding of the program design process is achieved. 
And my particular favourite quote about software development:

Today we tend to go on for years, with tremendous investments to find that the system, which was not well understood to start with, does not work as anticipated. We build systems like the Wright brothers built airplanes — build the whole thing, push it off the cliff, let it crash, and start over again.

This embryonic stage of software engineering was an exciting time.  Engineering practices hadn't matured yet (I'm not sure they have now!), but there was spirited discussion about the nature of software engineering.  The model of software engineering described is easily recognizable as the waterfall model.

Notice already the recognition that maintenance is about the same size as implementation and that implementation is an "error-prone translation process".

From the discussion about software engineering came many things that we see as important today.

The need for constant feedback about the system was emphasised many times.  Fraser describes the software development process in a way that will sound very familiar to anyone practising agile software development.
Design and implementation proceeded in a number of stages. Each stage was typified by a period of intellectual activity followed by a period of program reconstruction. Each stage produced a usable product and the period between the end of one stage and the start of the next provided the operational experience upon which the next design was based. In general the products of successive stages approached the final design requirement; each stage included more facilities than the last. On three occasions major design changes were made but for the most part the changes were localised and could be described as ‘tuning’
The emphasis on working features after each iteration is something we (as a profession) still struggle with today.  Breaking big features down into bite-sized stories that can be completed successfully is a hard problem.

Nowadays we seem to make less distinction between design and implementation.  Techniques such as TDD (coined around 2002? also known as test-driven design) try to ensure good design (or at least good-enough design) by driving the design through the tests.  Dijkstra probably wouldn't have practised TDD, but he would argue that tests (or, more likely, formal proofs!) are a vital part of the design process:

... I am convinced that the quality of the product can never be established afterwards. Whether the correctness of a piece of software can be guaranteed or not depends greatly on the structure of the thing made. This means that the ability to convince users, or yourself, that the product is good, is closely intertwined with the design process itself.
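In the test-driven spirit, the test is written first and acts as an executable specification; only then is the implementation written to satisfy it. A minimal sketch (my own hypothetical example, not from the report — `parse_version` is an invented function):

```python
# Test-first: this test exists before the implementation below,
# and pins down the design of the function's interface.
def test_parses_version_string():
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("10.0") == (10, 0)

# The implementation is then written just to make the test pass.
def parse_version(s):
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in s.split("."))

test_parses_version_string()
```

The point echoes Dijkstra's: the evidence that the product is good is built alongside the product, not bolted on afterwards.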
Perlis summed up the process of software development rather well:
1. A software system can best be designed if the testing is interlaced with the designing instead of being used after the design.
2. A simulation which matches the requirements contains the control which organizes the design of the system.
3. Through successive repetitions of this process of interlaced testing and design the model ultimately becomes the software system itself. I think that it is the key of the approach that has been suggested, that there is no such question as testing things after the fact with simulation models, but that in effect the testing and the replacement of simulations with modules that are deeper and more detailed goes on with the simulation model controlling, as it were, the place and order in which these things are done. 
So how was software design knowledge shared back then?  There was some talk about sharing decisions (even the wrong ones, to avoid having them repeated).  Naur was already aware of the need for software patterns to be established.
… software designers are in a similar position to architects and civil engineers, particularly those concerned with the design of large heterogeneous constructions, such as towns and industrial plants. It therefore seems natural that we should turn to these subjects for ideas about how to attack the design problem. As one single example of such a source of ideas I would like to mention: Christopher Alexander: Notes on the Synthesis of Form (Harvard Univ. Press, 1964)

It's interesting to read the discussion on high-level languages.  There was overwhelming agreement that high-level languages are a good thing, but at the time I think the implementations and tools available restricted their adoption.  Thankfully, this problem is fixed now (it's difficult to find people arguing against managed languages).

I found this paper an awesome read and recommend everyone read through it.  The cynic in me enjoyed seeing that the wonderful best practices advocated today have been around for years; it's just that they didn't have catchy buzzwords back then.  What I find exciting is that software engineering still hasn't found that silver bullet that improves software development by an order of magnitude.  Software engineering is going to be a challenge for the next 44 years.

I'd really like to see a version 2 of this paper; how has software engineering changed in the last 44 years?  Are there any collated experience reports?