## Predicting where the bugs are

Adam Tornhill’s Your Code as a Crime Scene (YCAACS) has lain open next to my laptop for several months now. I’m usually a fast reader and a technical book rarely lasts that long, unless the book is crammed with practical tips and advice that I want to try as I go along. YCAACS is no exception.

The book introduces a technique completely new to me: mining your code repository’s history for patterns known to correlate with code defects. For example, do the most complex modules in your project tend to become even more complex over time, suggesting that your technical debt is growing out of control? Each self-contained chapter presents a different analysis you can try out. In this post I will walk through the simplest example: correlating the number of revisions to a module with that module’s complexity.

I’ll start with one of our current internal projects, called romulus. We begin the analysis by extracting the repository log for the last two months, formatted in a way that makes the analysis easier:

git log --pretty=format:'[%h] %aN %ad %s' --date=short --numstat --after=2016-05-01 > romulus.log


The key argument here is --numstat: this reports the number of lines added or deleted for each file. It will tell us how frequently a given file, or module, has changed during that reporting period.

Next we use the code-maat tool written by the author of YCAACS. It’s a tool that will analyse the log of a code repository and extract different summary statistics. For our example, all we want to know is how frequently each module has been changed:

maat -l romulus.log -c git -a revisions > romulus_revs.csv


Next we need to correlate those changes with the complexity of each file. We won’t be using any fancy complexity metric here: the number of lines of code will suffice. We use cloc:

cloc * --by-file --csv --quiet > romulus_sizes.csv


We now have two CSV files:

• romulus_revs.csv: the number of revisions of each file in our repository
• romulus_sizes.csv: the size of each file

By doing the equivalent of a SQL JOIN on these files, you obtain for each file both its number of revisions and its size. You can do this in the analysis tool of your choice. I do it in Tableau and show the result as a heatmap, where each rectangle represents a module. The size of the rectangle is proportional to the size, or complexity, of the module, and the darkness of its color is proportional to the number of times it has changed over time. With Tableau you can hover over any of these rectangles and a window will pop up with detailed information about that module.
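If you prefer to do the join outside of a BI tool, the standard library is enough. Here is a minimal Python sketch; it assumes code-maat emits `entity` and `n-revs` columns and cloc emits `filename` and `code` columns, so check the headers of your own CSV files:

```python
import csv
import io

def join_revisions_with_sizes(revs_csv, sizes_csv):
    """Inner-join code-maat revision counts with cloc line counts.

    Assumes the revisions CSV has `entity` and `n-revs` columns and the
    cloc CSV has `filename` and `code` columns; adjust if yours differ.
    """
    revisions = {row["entity"]: int(row["n-revs"])
                 for row in csv.DictReader(io.StringIO(revs_csv))}
    joined = []
    for row in csv.DictReader(io.StringIO(sizes_csv)):
        name = row["filename"]
        if name in revisions:  # keep only files present in both reports
            joined.append({"file": name,
                           "revisions": revisions[name],
                           "loc": int(row["code"])})
    return joined
```

Feeding the result to any plotting tool then gives the same revisions-versus-size view as the heatmap.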

So what does this heatmap tell me? There’s no obvious outlier; a couple of modules in the upper-right corner have recently seen a lot of change, but I know that these modules implement some of the stories we are currently working on, so no surprise there. The map does, however, tend to become darker towards the left side, where the largest modules are shown. This suggests that some modules have been growing over time, possibly out of control. Clearly, this must be investigated, and these modules perhaps deserve more testing and/or refactoring than average.

“Your Code as a Crime Scene” is a fantastic book. Every chapter has a couple of ideas that you can try right away on your project. I suspect it will be of most value to technical leads and testers, both of whom I consider the guardians of code quality. I’m less sure, though, that other developers will find the ideas as easy to apply: doing it properly takes time, a certain mind-set, and some familiarity with data analytics. But if your team includes someone both willing and able to do it, I’m sure you will all benefit.

## How I review papers

Once you publish a paper in a journal, you are expected to regularly review papers for that journal. It’s part of the normal scientific process. Some may consider it a chore, but I see it as an opportunity to keep in touch with my field and to help quality papers get published.

When I was first asked to review a paper, there was very little help available on the subject. Things have improved considerably since then; for example, Elsevier maintains an Elsevier for Reviewers website with plenty of information. I recommend you start there for some basic reviewer training. But the last time I checked, that site would not yet tell you anything about how to read a paper or how to actually write a reviewer report.

Here is a workflow that works for me. Once I receive a reviewer invitation, here’s what I do:

##### Accept a reviewer invitation immediately

The whole scientific process depends on reliable and speedy reviewers. Do unto others as you would have them do unto you. When I am invited to review an article, that review takes priority over any other writing.

##### Read a first time generously

As soon as possible I read through the article, from beginning to end. Ideally in a single sitting, but if that’s not possible I do it in several. The goal is to form a general idea of what the article is about, how it is structured, and to prime my mind for what to look out for on the next reading.

##### Read a second time more critically

Next I re-read the article, but far slower and more critically. This is where I use GoodReader’s annotation tools: I highlight passages that I think need to be mentioned in my report; I strike through passages that I think can be omitted; I underline with squiggly lines passages that don’t read well and deserve to be changed. Sometimes I add a comment box summarising a point I don’t want to forget in my report.

When I highlight a passage I seldom record why I highlighted it. If I cannot remember why I highlighted a passage by the time I write the report, it probably wasn’t important.

##### Write the report

I don’t know how it goes for other journals, but the one I review most frequently for (Energy & Buildings) provides the reviewer with a free-form text field in which to enter their observations. (There is also a text form for private comments to the editor, but I seldom use that.) It’s important to realise that the comments from all the reviewers will be collated together and sent to the author, and sometimes also to the reviewers to notify them of the editor’s decision.

You can also include supplementary files with your review. The only time I’ve found this useful was when I needed to typeset mathematics in my review. However, I discovered that the supplementary files are not forwarded to the other reviewers, and I now avoid them.

Your report will therefore be written in plain text. I try to stick to the following template:

<express thanks and congratulations for the paper>

<summarise the paper’s main points>

<if there are major concerns about the paper, enumerate them here as a numbered list, most important ones first>

<for each section of the paper, enumerate the other (minor) suggestions/remarks as a numbered list, in the order in which they are found in the paper>

Keep in mind that the author will be required to respond to each of the reviewers’ comments. Providing them as a numbered list makes the author’s life simpler.

When I write the report I go through each of my annotations, one by one, and write a comment for each of them, either to the list of minor comments or to the major ones. By the time I reach the end of the paper, all my annotations will have a corresponding comment.

I write my report in Markdown with Vim. That way I do not need to worry about getting the numbering of the comments correct; I am free to re-order my comments, especially the ones that deal with major concerns, so that the most important ones come first. When I am satisfied I run the report through pandoc, and generate a text file:

pandoc -o %:r.txt %

After a final check I copy/paste the contents of that text file into the review submission platform.

##### Language issues

To this day I’m not sure whether the reviewer or the editor is responsible for fixing typos and other language errors. These days I tend to skip over them, unless the meaning of a sentence has become completely obscure. Instead, I usually add to my list of major concerns a sentence such as:

There are many typos and grammatical mistakes throughout the paper. For example the last sentence of the first paragraph of the Introduction reads as follows:

> … that allows for a more active participation of the demand side in the operation a control task of the power system.

or even:

The language quality of this paper does not meet the standards for
an international journal, and I found the paper very hard to follow.

In general I do not try to reformulate any passages. For many authors, English is a second language and I appreciate how hard it can be to communicate with clarity, even for native speakers. When necessary I might suggest that the authors have the paper reviewed by a native speaker.

##### Summary

That, in a nutshell, is how I review papers. I know it can feel like a chore, but I strongly encourage you to participate in the process, and I hope this workflow helps you get started. If you have any comments, I’d love to hear them.

## The DEBORAH project kick-off meeting

We are involved in DEBORAH, Eurostars project no. E!10286, led by EQUA Simulation AB, the vendor of the highly regarded IDA ICE building simulation software, together with CSEM and Bengt Dahlgren AB, a Swedish consultancy firm specialised in buildings. The project’s stated objective is to optimise the design and operation of district thermal energy systems.

We held the project’s kick-off meeting on Thursday 16th June, 2016, in EQUA’s offices in Stockholm. Neurobat’s role in the project will consist in providing short- and long-term estimates of heating loads, and in extending IDA ICE with the Neurobat control algorithms.

A pilot site has been identified in Krokslätt, a district in the city of Göteborg, where heating for several buildings is provided by heat pumps combined with a system of boreholes: narrow shafts drilled through the rocky ground, where the water fed to the heat pumps has its temperature raised by the surrounding heat. Besides “pre-heating” the water, this also has the benefit of improving the heat pumps’ coefficient of performance (COP). But few studies have been done regarding the optimal design (and operation) of such a system of boreholes, a gap that this project hopes to address.

This three-year project is a great opportunity for us to work with some of the domain’s thought leaders, and to integrate IDA ICE into our own product development workflow.

## Being blocked doesn’t mean you cannot work

If you’ve been on a Scrum team for some time, you will inevitably hear someone at the stand-up say:

Today I cannot work on <some feature> because of <some reason>, but that’s all right. I’m not otherwise blocked because I can also work on <some unrelated thing>.

There are two (very human) factors at play here: 1) the desire to be seen as a productive team member, and 2) the unwillingness to deal with bad news. Admitting to being blocked can even become a taboo in some teams. Yet what is the purpose of the stand-up, if not to bring such issues out in the open?

What’s wrong with having everybody always making some kind of progress? Isn’t that indeed one of the patterns in Coplien’s Organizational Patterns of Agile Software Development? The problem is that having your work blocked while you work on something else increases the amount of work in progress, or WIP. And WIP, in a software team, is waste and costs time, effort and money. Not all work is useful; working on non-priority items, when there’s a priority item that’s not taken care of, is the worst thing you can do.

Our team discussed this point at our last retrospective. No one contested the reality of this taboo in our team, and we resolved that from now on everyone should be open about his inability to progress.

As a team member, it’s ultimately your responsibility to be on the lookout for any such pattern; it’s not the ScrumMaster’s alone. Never let a teammate hide his impediments under a carpet of busy-ness; ultimately he, you, and the whole team will suffer.

## Was “Building Science” really the best we could come up with?

The big problem with Building Science is that we call it Building Science.

The academic study of buildings and their inhabitants is a young discipline; possibly even younger than Computer Science. The earliest articles in Building and Environment appear to date from 1966; Building Research & Information, from 1973. Like Computer Science, we have no single word for our field and are stuck with a compound. Most people seem content with Building Science, or perhaps Building Physics. The former has even been enshrined in a Wikipedia article.

But I dislike “Building Science”. I think it conveys neither the breadth of our field (ranging from the study of individual households to the optimal planning of cities) nor its depth. I find it at once too vague and too generic.

But what, then, shall we call the study and science of buildings? Chemists study chemistry; biologists study biology; geologists study geology; is there an -ology that would describe our field?

I asked that question on English Language & Usage (one of my favorite Stack Exchange sites, by the way). My question didn’t quite get the attention I had hoped for. I was expecting someone to come up with a nice-sounding Greek root to which we could affix -ology and have a proper term, but the best we could come up with was the following:

• Oikosology, from Oikos, “house, dwelling place, habitation”
• Weikology, from the Indo-European root weik (house)

I admit I am less than enthusiastic about them. I have to confess that another reason why I started this inquiry was that, just as there’s such a thing as Computational Chemistry and Computational Biology, I wanted a two-word phrase that would mean the application of computing techniques to the study and science of buildings. But I doubt we will be seeing the Journal of Computational Oikosology anytime soon…

If you have any better proposals, feel free to post them in the comments.

## CARNOT has an official home

I’m pleased to report that CARNOT, the Simulink library of models for HVAC systems, now has an official home. You can find it by visiting its page on MATLAB Central, where you will also find a link to the official releases, hosted by the Aachen University of Applied Sciences.

HVAC systems, in spite of their importance in the global energy supply and demand, remain underrepresented on the MATLAB & Simulink platform. This is a problem for us at Neurobat because we develop new HVAC control algorithms, and few simulation environments let you define new control schemes. MATLAB & Simulink offers the flexibility we need, but we had not been able to find a well-regarded library of models for buildings and HVAC systems, until we were introduced to CARNOT.

We’re very glad that CARNOT is now back in the public light and look forward to its continued development and success.

## Linus Torvalds thinks like a chess grandmaster

I’ve uncovered evidence that Linus Torvalds, creator of Linux, may entertain a secret hobby.

An interview of Linus Torvalds in a recent issue of IEEE Spectrum had the following passage:

I’d rather make a decision that turns out to be wrong later than waffle about possible alternatives for too long.

On the surface, this sounds like your usual admonition against analysis paralysis (Wikipedia). But what Linus said echoes something that Alexander Kotov (Wikipedia), former chess grandmaster, wrote in 1971 in his Thinking like a Grandmaster (Amazon):

Better to suffer the consequences of an oversight than suffer from foolish and panicky disorder in analysis.

If I didn’t know better I would conclude that the same person wrote these two passages.

## Where all floating-point values are above average

When you just fix a programming bug quickly, you lose. You waste a precious opportunity to think and reflect on what led to this error, and to improve as a craftsman.

Some time ago, I discovered a bug. The firmware was crashing, seemingly at random. It was eventually resolved, the fix reviewed and tested, and temptation was high to just leave it at that and get on with what was next on the backlog.

This is probably how most programmers work. But it’s probably wrong. Here’s Douglas Crockford on the topic, interviewed by Scott Hanselman:

There’s a lot of Groundhog’s Day in the way that we work. One thing you can do is, every time you make a mistake, write it down. Keep a bug journal.

I wanted to give it a try. So what follows is my best recollection of how I solved the bug.

First, the observations. You cannot be a successful debugger if you are not a successful observer. My firmware wasn’t quite crashing at random: it would crash and reboot 18 times in very quick succession (all within a few minutes) following a firmware update. Once this tantrum was over it would behave normally again.

It was a new firmware version. The same firmware had been deployed on other devices, but without the same problem. So why should it happen on some devices but not all of them?

There are some useful heuristics to keep in mind when debugging. As I said above, careful observation and note-keeping come first; beyond that, I ask myself the following questions:

1. What changed between this release and the previous one?
2. What is different between this environment and another where the failure doesn’t occur?
3. Carefully go through whatever logfiles you may have. Document anything you notice.
4. How often does the failure happen? Any discernible pattern?

In this case, the software changes introduced by this release were relatively minor and I judged it unlikely that those changes were the cause of the problem. If they were, I would expect to see the same problem on all devices.

Now when I say that something is “unlikely”, I mean of course that there must be something else that is more likely to be the real explanation. Nothing is ever unlikely by itself, and if you can remove feelings from your day-to-day work you’ll be a better engineer. But more on this in another post.

I next examined the logfiles, and noticed that the first recorded crash was not a crash. It was the normal system reboot when a new firmware was installed. The second crash was not a crash either. It was a factory reset of the system, performed by the person who updated the system to the new firmware. It’s an operation that can only be done manually, and the only crashing device was the one that had been factory-reset right after the firmware update.

So someone had logged into that device and factory-reset the system. Going through the /var/log/auth logfiles I could determine who had done it. When confronted, he confirmed that he had reset the system in order to try an improved version of our heating schedule detection algorithm.

Now there’s nothing wrong with that; but it’s well-known that bugs are more likely in the largest, most recently changed modules. The module doing heating schedule detection was relatively large, complex, and recently changed.

Now experience had shown that only two events could cause the firmware to crash and reboot:

• a watchdog reset;
• a failed assertion.

(A watchdog is a countdown timer that will reboot the system after a given timeout, typically on the order of a second. You’re supposed to reset the timer manually at regular intervals throughout your application. It’s meant to prevent the system from getting stuck in infinite loops.)

At this point I went through the implementation of that algorithm very carefully, keeping an eye on anything that could be an infinite loop or a failed assertion. When I was done, I was fairly confident (i.e. could almost prove) that it would always terminate. But I also came across a section of code whose gist was the following:

float child[24]; // assume child[] is filled here with some floating-point values
float sum = 0;
float avg;
for (int i = 0; i < 24; i++)
    sum += child[i];
avg = sum / 24; // compute the average of the elements of child[]

int n_above_avg = 0; // count how many elements are greater than the average
int n_below_avg = 0; // count how many elements are less than or equal to the average
for (int i = 0; i < 24; i++)
    if (child[i] <= avg)
        n_below_avg++;
    else
        n_above_avg++;
assert(n_below_avg > 0); // at least one element must be less than or equal to the average


That was the only place where an assertion was called. Could this assertion ever fail? This code calculates the average of a set of floating-point values, and counts how many elements are less than or equal to the average (n_below_avg), and how many are greater (n_above_avg). Elementary mathematics tells you that at least one element must be less than or equal to the average.

But we’re dealing with floating-point variables here, where common-sense mathematics doesn’t always hold. Could it be that all the values were greater than their average? I asked that question on Stackoverflow. Several answers came quickly back: it is indeed perfectly possible for a set of floating-point numbers to all be above their average.

In fact, it’s easy to construct such a set: just make all the values identical. One respondent gave a value which, repeated 24 times, yields a computed average strictly smaller than every element:

#include <cassert>

int main() {
    const int sz = 24;
    double values[sz];
    for (int i = 0; i < sz; i++)
        values[i] = 0.108809;

    double avg = 0.0;
    for (int i = 0; i < sz; i++)
        avg += values[i];
    avg /= sz;

    // every element turns out to be strictly greater than the average
    assert(values[0] > avg);
    return 0;
}


Once the root cause of the problem was identified, it was relatively easy to write a failing unit-test and implement a solution.

Well, that’s the news from the world of programming where all the floating-point values can be above average. Who said Lake Wobegon was pure fiction?

## The one question not to ask at the standup meeting

What is the very first question one is supposed to answer during a standup meeting? If your answer is:

What did you do since the last standup?

then congratulations: that is indeed the question recommended by Mike Cohn himself. But I am now convinced that it is the wrong question to ask.

When you ask someone What did you do?, you are inviting an answer along the lines of:

I worked on X.

The problem with this answer is that depending on X, you really don’t know what the team member has achieved. Consider the following possibilities, all perfectly reasonable answers to the question:

I worked on the ABC-123 issue and it is going well.

I worked on some unit tests for this story.

I worked with [team member Y].

You simply cannot tell if any progress is being made. Sure, you can ask clarifying questions, but this will prolong the standup. Instead, I wish to suggest a slightly different version of that first question:

What did you get done since the last standup?

Here the emphasis is on what work was completed, not on what has been “worked” on. The deliverable becomes the object of the conversation, not the activity. The answers above don’t answer the question anymore, and this is what you might instead hear:

I tested and rejected 3 hypotheses for the cause of the ABC-123 issue, but I can think of at least 2 more.

I wrote a custom function for testing object equality and converted some unit tests to use it.

I paired with [team member Y] and we […]

Ambiguity and vagueness during the standups have regularly been an issue for our own team, and I am sure we are not the only ones. If you have fallen into the habit of asking the first version of this question, consider trying the second version and let me know (in the comments below) how that works out for you.

## The opinionated estimator

You have been lied to. By me.

I once taught a programming class and introduced my students to the notion of an unbiased estimator of the variance of a population. The problem can be stated as follows: given a set of observations $(x_1, x_2, \ldots, x_n)$, what can you say about the variance of the population from which this sample is drawn?

Classical textbooks, MathWorld, the Khan Academy, and Wikipedia all give you the formula for the so-called unbiased estimator of the population variance:

$$\hat{s}^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})^2$$

where $\bar{x}$ is the sample mean. The expected error of this estimator is zero:

$$E[\hat{s}^2 - \sigma^2] = 0$$

where $\sigma^2$ is the “true” population variance. Put another way, the expected value of this estimator is exactly the population variance:

$$E[\hat{s}^2] = \sigma^2$$

So far so good. The expected error is zero, therefore it’s the best estimator, right? This is what orthodox statistics (and teachers like me who don’t know better) will have you believe.

But Jaynes (Probability Theory) points out that in practical problems one does not care about the expected error of the estimated variance (or of any estimator, for that matter). What matters is how accurate this estimator is, i.e. how close it is to the true variance. And this calls for an estimator that will minimise the expected squared error $E[(\hat{s}^2 - \sigma^2)^2]$. But we can also write this expected squared error as:

$$E[(\hat{s}^2 - \sigma^2)^2] = (E[\hat{s}^2] - \sigma^2)^2 + \mathrm{Var}(\hat{s}^2)$$

The expected squared error of our estimator is thus the sum of two terms: the square of the expected error, and the variance of the estimator. When following the cookbooks of orthodox statistics, only the first term is minimised and there is no guarantee that the total error is minimised.

For samples drawn from a Gaussian distribution, Jaynes shows that an estimator that minimises the total (squared) error is

$$\hat{s}^2 = \frac{1}{n+1}\sum_{i=1}^n (x_i - \bar{x})^2$$
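This result can be derived in a few lines, using the standard chi-squared facts for Gaussian samples. The sum of squared deviations $SS = \sum_{i=1}^n (x_i - \bar{x})^2$ satisfies $SS \sim \sigma^2 \chi^2_{n-1}$, so $E[SS] = (n-1)\sigma^2$ and $\mathrm{Var}(SS) = 2(n-1)\sigma^4$. For an estimator of the form $\hat{s}^2_c = SS/c$, the decomposition of the expected squared error gives

$$E[(\hat{s}^2_c - \sigma^2)^2] = \left(\frac{E[SS]}{c} - \sigma^2\right)^2 + \frac{\mathrm{Var}(SS)}{c^2} = \frac{\sigma^4}{c^2}\left[(n-1-c)^2 + 2(n-1)\right]$$

Minimising over $c$ yields $c = n+1$, with minimum expected squared error $2\sigma^4/(n+1)$, against $2\sigma^4/(n-1)$ for the unbiased choice $c = n-1$.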

Notice that the $n-1$ denominator has been replaced with $n+1$. In a fit of fancifulness I’ll call this an opinionated estimator. Let’s test how well this estimator performs.

First we generate 1000 random sets of 10 samples with mean 0 and variance 25:

samples <- matrix(rnorm(10000, sd = 5), ncol = 10)


For each group of 10 samples, we estimate the population variance first with the canonical $n-1$ denominator. This is what R’s built-in var function will do, according to its documentation:

unbiased <- apply(samples, MARGIN = 1, var)


Next we estimate the population variance with the $n+1$ denominator. We take a little shortcut here by multiplying the unbiased estimator by $(n-1)/(n+1)$, but it makes no difference:

opinionated <- apply(samples, MARGIN = 1, function(x) var(x) * (length(x) - 1) / (length(x) + 1))


Finally we combine everything in one convenient data frame:

library(lattice)  # provides histogram() and the panel functions

estimators <- rbind(data.frame(estimator = "Unbiased", estimate = unbiased),
                    data.frame(estimator = "Opinionated", estimate = opinionated))

histogram(~ estimate | estimator, estimators,
          panel = function(...) {
              panel.histogram(...)
              panel.abline(v = 25, col = "red", lwd = 5)
          })


It’s a bit hard to tell visually which one is “better”. But let’s compute the average squared error for each estimator:

aggregate(estimate ~ estimator, estimators, function(x) mean((x - 25)^2))

##     estimator estimate
## 1    Unbiased 145.1007
## 2 Opinionated 115.5074


This shows clearly that the $n+1$ denominator yields a smaller total (squared) error than the so-called unbiased $n-1$ estimator, at least for samples drawn from a Gaussian distribution.

So do your brain a favour and question everything I tell you. Including this post.