My husband (aka Mr. Releng) and I were driving home from his office party in mid-December and had the following conversation.
Me: A publicist from O’Reilly emailed me today and asked if I would like a free copy of Making Software.
MR: Huh. Why did they offer you a free book?
Me: I don’t know. I mentioned to one of my Eclipse colleagues on Twitter that I thought it would be an interesting book to read. Also, one of the editors is Greg Wilson, who is also editing the upcoming The Architecture of Open Source Applications. (I contributed a chapter on Eclipse to AOSA.)
MR: Next time, mention on Twitter that you’d like a free car. What’s the book about?
Me: Free car from Twitter? Don’t get your hopes up. The book investigates the evidence that supports common software engineering practices. In medicine, a clinical trial is conducted to see whether a new drug has a statistically significant effect compared to a placebo or an existing treatment. The same evidence-based principles can be applied to software engineering to determine whether TDD or Agile methods are actually effective. I’ve always thought there was a lot of shouting but scant evidence to support different software development practices.
MR: Yeah, undergraduate computer science is engineering. Graduate level computer science is math.
As you may have guessed, I’m married to a mathematician.
In addition to being a fan of open source, I also enjoy reading about different scientific disciplines. I love reading ScienceBlogs and have read many great books over the past year about evidence-based medicine. So when I heard there was a book that examined which software engineering practices actually work, I was intrigued.
In any case, I finished reading it recently. It was very interesting and I learned a lot! The book is split into two sections. The first section deals with research methods. For instance:
- How to conduct a systematic literature review
- Empirical methods that can be applied to software engineering studies, and why software engineering is inherently difficult to measure quantitatively.
- How to decide on the papers to include in a meta-study.
The second section examines the evidence for different questions in software engineering. For instance:
- How do you measure programmer productivity? Are some developers really an order of magnitude more productive than their teammates?
- Does test driven development result in better software?
- What’s the cost of discovering a bug early in the development cycle versus after the product has shipped?
- Which working environments (cubicles, offices, open concept, etc.) are the most productive for software developers?
- Is there a measurable difference in software quality between open and closed source software?
- What are the reasons for the low proportion of women working in the software industry?
- Does the use of design patterns improve software quality?
- How to mine data from open source repositories for your own studies (Chapter 27 uses the Eclipse project as an example 🙂)
All in all, I found it a very interesting book that examined the actual empirical evidence to support or refute some of the sacred cows in software engineering. I think this is a refreshing step forward for our profession. If there aren’t numbers to show that the way we work is effective, shouldn’t we alter our path and adopt better methods?
As an aside, some great books on evidence-based medicine are:
- Bad Science by Ben Goldacre
- Trick or Treatment by Simon Singh and Edzard Ernst
- Snake Oil Science by R. Barker Bausell