How to fix rotation problems with iPhone pictures

When I take a picture with my vertically-held iPhone, here is what happens when I insert it as-is in this blog:

[Image: wrongly rotated iPhone picture]

But the picture shows up correctly when I open it in any OSX application, such as Preview. The issue is that when you take a picture with your iPhone, a metadata tag gets written to the file telling OSX how to rotate the picture when it is displayed. You can see the tag using the inspector in Preview:

[Image: Preview inspector data for the iPhone picture]

The offender here is that Orientation tag, which seems to be used only by OSX applications. The best way to fix this is to remove the tag, rotate the picture correctly with Preview, and save it again.

To remove the tag, I recommend using a tool called ExifTool. It’s a neat command-line tool that you can download here. Once downloaded, removing the tag is as simple as this:

$ exiftool -Orientation= filename.jpeg

This replaces filename.jpeg with the same file but with the tag removed, and saves a copy of the original file as filename.jpeg_original. Give it a try; I really recommend it.
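A couple of handy variations (the filename below is made up): you can inspect the tag before removing it, and exiftool accepts several files or wildcards at once, so a whole folder can be fixed in one go:

$ exiftool -Orientation IMG_0042.jpeg   # show the current tag
$ exiftool -Orientation= *.jpeg         # strip it from every JPEG in the folder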

Posted on November 30, 2015 at 10:00 am by David Lindelöf

Reviewer queue

During a recent sprint retrospective we raised a problem with the way we assign code reviews. Not the formal, whole-team ones, but the regular ones we solicit for each pull request.

The problem was that we tend to select our reviewers based on various subjective criteria, including how well we like the person. I admit I am guilty of this myself. What’s more, during the discussion it became clear that my own help in reviewing code was not being asked for as often as it used to be.

At Neurobat we currently have a rule that all pull requests must be reviewed by two other team members (one, if the pull request was paired on). To ensure these reviewers are selected fairly and without subjectivity, we have now introduced a reviewer queue: our names are listed on the main whiteboard and an arrow is drawn, showing who is next in the review queue. When a reviewer is assigned, the arrow moves to the next name.

[Image: Neurobat reviewer queue]

We’ve had this in place for a couple of sprints now and the results have been very satisfying.

An added benefit for myself is that by explicitly putting my name in the review queue, I announce my willingness to participate in the reviewing process as much as anyone else. As a result, I’ve been reviewing much more code these last couple of weeks than ever before.

If you have a problem with the selection of reviewers in your own team, do consider setting up a reviewer queue, and let me know whether it works out for you.

Posted on November 27, 2015 at 10:00 am by David Lindelöf

How to test for floating point exceptions with CppUTest

Some programmers, when confronted with a problem, think “I know, I’ll use floating point arithmetic.” Now they have 1.999999999997 problems. (Tom Scott)

Floating point arithmetic is notoriously hard to get right. I consider writing a bug-free, optimally performant numeric library to be approximately as hard as writing a compiler. Fortunately, most programmers don’t need to deal with it, unless their work involves science or engineering.

There’s one subject, though, where I think you need to be a bit more careful: understanding when and why your program will raise floating point exceptions (FPE). Let’s consider a couple of examples.

Consider first this Java program:

public class FPE {
  public static void main(String[] args) {
    int i = 0;
    System.out.println("1 / 0 = " + (1 / i));
  }
}

Compiling it and running it yields:

$ javac
$ java FPE
Exception in thread "main" java.lang.ArithmeticException: / by zero
	at FPE.main(

In Java, dividing an integer by zero yields an ArithmeticException. Fair enough. What about floating points?

public class FPE2 {
  public static void main(String[] args) {
    double i = 0;
    System.out.println("1 / 0 = " + (1 / i));
  }
}

Now this yields something different:

$ javac
$ java FPE2
1 / 0 = Infinity

I’m not sure I like having such a wildly different behavior. But consider now the same programs in C:

#include <stdio.h>

int main() {
  int i = 0;
  printf("1 / 0 = %d\n", 1 / i);
  return 0;
}

This is the result (under OSX):

$ gcc -o FPE FPE.c
$ ./FPE
Floating point exception: 8

Not exactly the most helpful error message ever, but at least the program crashes. Now the same thing with doubles:

#include <stdio.h>

int main() {
  double i = 0.;
  printf("1 / 0 = %g\n", 1 / i);
  return 0;
}

And here’s the result:

$ gcc -o FPE2 FPE2.c
$ ./FPE2 
1 / 0 = inf

So Java and C behave similarly: dividing an integer by zero crashes the program, but dividing a double by zero does not. I find it rather unsettling that 1 / 0 (integer division) and 1 / 0. (floating-point division) should result in completely different programs. I realise now that I had assumed all divisions by zero would be caught at runtime and cause the program to fail. This is, however, simply not true.

Our code at Neurobat includes a fair amount of numeric algorithms, which are decently covered by our unit tests. However, there remained the small possibility that the code could execute “illegal” floating point operations and silently fail.

There is no portable way to force a program to crash when a floating point exception is raised. You need to make sure that floating point exceptions cause a SIGFPE signal to be sent to your program. Only Google can help you here, but for OSX here is how you do it.
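For the record, here is a minimal sketch of what this looks like on Linux with glibc, whose feenableexcept() turns on trapping. This is an assumption about your platform; the OSX recipe is different:

#define _GNU_SOURCE
#include <fenv.h>
#include <stdio.h>

int main() {
  /* Ask the FPU to deliver SIGFPE on these exceptions (glibc only). */
  feenableexcept(FE_DIVBYZERO | FE_INVALID | FE_OVERFLOW);
  volatile double zero = 0.;          /* volatile keeps the compiler from folding it away */
  printf("1 / 0 = %g\n", 1. / zero);  /* now dies with SIGFPE instead of printing inf */
  return 0;
}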

What you can do in a portable way is to test if a floating point exception was raised, and I highly recommend that you check for most floating-point exceptions in your unit tests. I say “most”, because you probably don’t need to test for FE_INEXACT. See the manpage for fenv for details.

Here is how we do it in the CppUTest framework. You need to test for exceptions before and after running your unit tests; note the call to assert_no_fpe_raised() after the test run below. We use plain assertions because CppUTest doesn’t like its assertions being used outside of a test run.

#include "CppUTest/CommandLineTestRunner.h"

#include <cassert>
#include <fenv.h>

void assert_no_fpe_raised(void) {
  assert(0 == fetestexcept(FE_INVALID) && "Invalid floating-point exception raised during tests.");
  assert(0 == fetestexcept(FE_DIVBYZERO) && "Division by zero raised during tests.");
  assert(0 == fetestexcept(FE_OVERFLOW) && "Overflow raised during tests.");
  assert(0 == fetestexcept(FE_UNDERFLOW) && "Underflow raised during tests.");
#ifdef FE_DENORMALOPERAND  /* x86-specific, not part of standard C */
  assert(0 == fetestexcept(FE_DENORMALOPERAND) && "Denormal operand raised during tests.");
#endif
  assert(0 == fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT) && "Floating-point exceptions (other than inexact) raised during tests.");
}

int main(int argc, char** argv) {
  int result;
  assert(0 == fetestexcept(FE_ALL_EXCEPT) && "Floating-point exceptions active before tests begin.");
  result = RUN_ALL_TESTS(argc, argv);
  assert_no_fpe_raised();
  return result;
}
So did we ever catch any bug with this? Indeed we did. We use an off-the-shelf optimisation algorithm that minimises an objective function in an $N$-dimensional space. At each iteration, the algorithm needs to compute the midpoint between two points where the objective function is to be evaluated. It does this by taking the mean of the points’ coordinates in the naive way: $x’ = \frac{x_1 + x_2}{2}$. What we found was that if $x_1$ or $x_2$ is large enough, their sum could overflow. What’s worse, the program would not terminate or fail in any visible way, but just return rubbish.
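The failure mode is easy to reproduce. Here is a minimal sketch (the values are made up, chosen to sit near DBL_MAX) showing the naive midpoint overflowing while the usual reformulation, $x’ = x_1 + (x_2 - x_1)/2$, stays in range:

#include <fenv.h>
#include <stdio.h>

int main() {
  double x1 = 1.7e308, x2 = 1.6e308;   /* made-up values near DBL_MAX */

  double naive = (x1 + x2) / 2.;       /* x1 + x2 overflows to +inf */
  double safer = x1 + (x2 - x1) / 2.;  /* same midpoint, no overflow */

  printf("naive = %g, safer = %g\n", naive, safer);
  printf("FE_OVERFLOW raised? %s\n", fetestexcept(FE_OVERFLOW) ? "yes" : "no");
  return 0;
}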

The bottom line is that if your program does any kind of floating point computation, you should consider having your unit test framework check for floating point exceptions. It probably won’t do it by default.

Posted on November 25, 2015 at 10:00 am by David Lindelöf

Not prioritising architectural needs

In Mike Cohn’s User Stories Applied there is this little paragraph that I think many teams (including my own) tend to forget:

Developer Responsibilities

You are responsible for providing information (sometimes including your underlying assumptions and possible alternatives) to the customer in order to help her prioritize the stories.

You are responsible for resisting the urge to prioritize infrastructural or architectural needs higher than they should be.

Indeed, on a team with technically strong members you will sometimes see proposals for stories such as:

Define our service’s API

As a developer, I want a clear and stable API so that I can develop the client-side code more effectively.

This story has all the virtues of a well-written user story: a clear title, a clear stakeholder, and a statement left intentionally vague to make sure that people will talk it over among themselves. Yet something is wrong here.

The problem is that the story brings value neither to the business nor to the users. It is part of a larger story; it is a task, or a TODO item, masquerading as a user story. It is a (no doubt well-intentioned) attempt at breaking down a larger story into small ones. But it doesn’t work.

It doesn’t work because once it is done, you are worse off than when you began. How is this possible? It’s possible because you now own software that is neither finished nor potentially shippable. It is by definition unfinished work, and chances are that the mass of unfinished work will only grow over time. Unfinished work is like inventory: it is waste and it costs money.

Instead, it is your responsibility to gently nudge the team towards what’s sometimes known as a Walking Skeleton, i.e. a system that implements a small piece of functionality end-to-end. Try hard to achieve this, and be prepared for any objections the team may have. The benefits are immense, and experience has shown that the resulting system will be better designed and easier to test.

Posted on November 23, 2015 at 10:00 am by David Lindelöf

Bayesian tanks

The frequentist vs Bayesian debate has plagued the scientific community for almost a century now, yet most of the arguments I’ve seen involve philosophical considerations rather than hard data.

Instead of letting the sun explode, I propose a simpler experiment to assess the performance of each approach.

The problem reads as follows (taken from Jaynes’s Probability Theory):

You are traveling on a night train; on awakening from sleep, you notice that the train is stopped at some unknown town, and all you can see is a taxicab with the number 27 on it. What is then your guess as to the number N of taxicabs in the town, which would in turn give a clue as to the size of the town?

In a different setting, this problem is also known as the German tank problem, where again the goal is to estimate the total size of a population from the serial number observed on a small sample of that population.

The frequentist and Bayesian approaches yield completely different estimates for the number N. To see which approach gives the most satisfactory estimates, let’s generate a large number of such problems and look at the error distributions that arise from either approach.

n.runs <- 10000
N.max <- 1000
N <- sample(x = N.max, size = n.runs, replace = TRUE)
m <- sapply(N, sample, size = 1)

We run this experiment n.runs times. Each time we select a random population size N uniformly drawn between 1 and N.max. We draw a random number m between 1 and N, representing the serial number that is observed.

The frequentist approach gives the following formula for estimating $N$: $\hat{N} = 2m-1$. Intuitively, the argument runs that the expected value of $m$ is $(N+1)/2$, so $m$ is our best estimate for half of $N$ and twice $m$ is our best estimate for $N$. As for the $-1$: it makes the estimator unbiased, since the expectation of $2m - 1$ is then exactly $N$.

The Bayesian approach is more complex and requires one to provide an estimate for the maximum possible number of taxicabs. Let’s therefore assume that we know the total number of cabs will not be larger than 1000. (The frequentist approach cannot use this information, but to make the comparison fair we will cap the frequentist estimate at 1000 when it is larger.)

Then the Bayesian estimate is given by $\hat{N} = (N_\textrm{max} + 1 - m) / \log(N_\textrm{max} / (m - 1))$.

Putting it all together, we create a data frame with the errors found for both approaches:

frequentist <- pmin(m * 2 - 1, N.max) - N
bayesian <- (N.max + 1 - m) / log(N.max / (m - 1)) - N
errors <- rbind(data.frame(approach = "FREQ",
                           errors = frequentist),
                data.frame(approach = "BAYES",
                           errors = bayesian))

The mean square error for each approach is then given by:

> by(errors$errors^2, errors$approach, mean)
errors$approach: FREQ
[1] 73436.81
errors$approach: BAYES
[1] 44878.61

The Bayesian approach yields close to half the squared error of the frequentist approach. The errors can also be plotted:

library(lattice)
histogram(~ errors | approach, errors)

[Image: histograms of the taxicab estimation errors]

Both error distributions are skewed towards negative values, meaning that both approaches tend to underestimate $N$. However, the Bayesian errors are more tightly distributed around 0 than the frequentist ones.

The bottom line is that, given exactly the same information, the Bayesian approach yields estimates whose squared error is about 60% of that of the frequentist approach. For this particular problem, there is no question that the Bayesian approach is the correct one.

Posted on November 20, 2015 at 10:00 am by David Lindelöf

ISH 2015 — first impressions

ISH, held every two years in Frankfurt, describes itself as “The world’s leading trade fair The Bathroom Experience, Building, Energy, Air-conditioning Technology, Renewable Energies”. At Neurobat we develop systems for improved and more efficient indoor climate control, so it was only natural that we attend as visitors.


A small party from our company visited the fair, which was spread out over 12 halls organised by topic. Each of these halls would easily have required at least half a day to do it proper justice, so it was obviously not possible to visit the entire fair.

My professional interests made me focus on two domains: heating systems and control systems. Here are some key observations, together with some pictures I took:

Internet connectivity

This was a recurring theme in the heating systems hall. Every single heating system manufacturer seemed to have a solution to connect their system to the “Internet Of Things”.

There appear to be two main benefits to having your system connected to the web: remote control and remote maintenance. Remote control is all about having the possibility (usually through some app) to control your house remotely. Remote maintenance is aimed more at the installer, who gets the possibility to remotely monitor, and proactively intervene on, your system.

It is hard to determine if this is a fad or a long-term trend, but I am very excited by the latter possibility.

Lack of awareness about advanced control algorithms

It is fairly well known that the standard weather-compensated heating control systems in wide use today deliver a suboptimal energy performance. When I visited the control systems hall, I was looking forward to finding proposals for more advanced control algorithms.

I was therefore a bit disappointed to find no such offer. Manufacturers of control systems appear to have made a lot of progress in making their systems easier to program and configure, with easy-to-use graphical programming interfaces, but when you drill down into their library of standard components you always find the good old heating curve.

That being said, I did visit a few booths and asked how open they were, as manufacturers, to letting third parties provide add-on components for their libraries of elements. I was pleasantly surprised to learn that, more often than not, the response was positive.

Posted on March 16, 2015 at 9:59 am by David Lindelöf

How not to get hired by Neurobat

When I recruit software engineers I always ask them to first take a short online programming test. Following a recommendation from Jeff Atwood, we use Codility as an online programming testing tool.

The goal of this test is not to assess whether you are a good programmer. I believe there’s more to software engineering than merely being able to code a simple algorithm under time pressure. The goal is to filter out self-professed programmers who, in fact, can’t program. And according to Jeff Atwood again, these people are uncomfortably numerous.

During our current recruitment round we got an angry email from a candidate who performed less than stellarly:

Thank you for your e-mail, outlining that you don’t wish to proceed further with my application.

I fully understand your position, though I feel that your online testing system is flawed. I have been programming C and C++ on and off for 25 years, so I guess if I don’t know it, then nobody does.

It’s simply not realistic to test people under such artificial conditions against the clock, relatively unprepared and in a strange development environment.

Nevertheless, I’m glad to have experienced the test, and it has helped resolve my focus on exactly the type of jobs that I don’t wish to pursue, and the types of people I don’t wish to work with.

This is from a candidate who, according to his resume, is an “experienced IT professional” with 10+ years of experience in C/C++, Javascript, Perl, SQL, and many others. Let’s have a look at the programming test and his solution.

The test consists of two problems, rated “Easy” and “Medium” respectively by the Codility platform. Candidates have one hour to complete the test. They can take it only once, but whenever they want, and they are given the opportunity to practice first.

Here is the gist of the first, “Easy” problem:

Write a function int solution(string &S); that, given a non-empty string S consisting of N characters, returns 1 if S is an anagram of some palindrome and returns 0 otherwise.

For example, "dooernedeevrvn" is an anagram of the palindrome "neveroddoreven". A minute of reflection should be enough to realise that a string is an anagram of a palindrome if and only if at most one letter occurs an odd number of times.

Here is Mr. If-I-don’t-know-it-nobody-does’s solution in toto:

// you can use includes, for example:
// #include <algorithm>
#include <iostream>
#include <vector>

using namespace std;

// you can write to stdout for debugging purposes, e.g.
// cout << "this is a debug message" << endl;

int solution(string &S) {
    // --- string size ---
    int N = S.size();
    char *str;
    bool even;
    vector<int> cnt(N,0);
    // --- even no of letters? ---
    if (N % 2)
       even = false;
    else
       even = true;
    // --- for faster access ---
    str = (char *)S.c_str();

    // --- count each letter occurence ---
    // --- for each letter and check letter count of all others ---
    for (int i=0;i<N;i++) {
        //cout << "checking " << i << str[i] << '\n';

        // --- exists at least once ---
        cnt[i] = 1;

        // --- check all other positions ---
        for (int j=0;j<N;j++) {
            if (j==i)
               continue;
            if (str[i] == str[j]) {
               cnt[i]++;
               //cout << "found match" << '\n';
            }
        }
    }
    // --- if length even, chk all letter count is even ---
    if (even) {
       for (int i=0;i<N;i++)
           // --- if odd count found, then no palindromes ---
           if (cnt[i] % 2)
              return 0;
       // --- all letters are even ---
       return 1;
    // --- if odd chk only one letter has odd count ---
    } else {
       int l_odd_cnt=0;
       for (int i=0;i<N;i++)
           // --- letter appears odd number of times ---
           if (cnt[i] %2)
              l_odd_cnt++;
       // --- more than one odd letter count found ---
       if (l_odd_cnt == 1)
          return 1;
    }
    return 0;
}

Never mind that this solution has O(n²) time complexity and O(n) space complexity (the test asked for O(n) and O(1) respectively); it is also wrong: it returns 0 for “zzz”. But perhaps the use of C-style char* “for faster access” will compensate for the algorithmic complexity.
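For reference, here is a sketch of a solution that does meet the stated bounds, assuming the input contains only lowercase letters (my assumption, not the official Codility solution):

#include <string>

// O(n) time, O(1) space: track the parity of each letter's count in a
// 26-bit mask; a string is an anagram of a palindrome iff at most one
// letter has an odd count, i.e. at most one bit remains set.
int solution(std::string &S) {
    int odd_mask = 0;
    for (char ch : S)
        odd_mask ^= 1 << (ch - 'a');  // toggle this letter's parity
    return (odd_mask & (odd_mask - 1)) == 0 ? 1 : 0;  // 0 or 1 bit set?
}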

Let’s have a look at another solution proposed by a self-titled senior programmer:

// you can use includes, for example:
// #include <algorithm>

// you can write to stdout for debugging purposes, e.g.
// cout << "this is a debug message" << endl;
#define NUM_ALPH 30
#define a_ASCII_OFFSET 97

int solution(string &S) {
    // write your code in C++11
    //std::map<char,int> letters_to_counts;
    //std::map<char,int>::iterator it;
    //int len = S.size;
    //string alph = "abcdefghijklmnopqrstuvwxyz";
    int count[NUM_ALPH]={};  // set to 0
    for(int i==0; i< len; i++)
        char ch= S[i];
        int index= (int)ch;  // cast to int 
        count[index-a_ASCII_OFFSET]^=1;  //toggles bit, unmatched will have 1,
    int sum_unmatched=0;
    for(int i=0; i < NUM_ALPH; i++)
    if(sum_unmatched<=1)return 1;
    return 0;
// did not have time to polish but the solution logic should work

This one doesn’t even compile, but fortunately the “logic should work”. I’m sure it will, written as it is in C, and with helpful comments too (“cast to int”, really?).

I have several more examples like these, all from candidates who applied to a job ad in which I made the mistake of asking for a Senior Software Engineer.

Compare this with a contribution from someone who applied to a non-senior position:

#include <algorithm>
#include <map>

map<char, int> createDictionary(string & S) {
    map<char, int> result;
    for (char ch : S) {
        ++ result[ch];
    }
    return result;
}

int solution(string & S) {
    map<char, int> dictionary = createDictionary(S);
    int numEvens = count_if(dictionary.begin(), dictionary.end(),
        [] (const pair<char, int> & p) { return p.second % 2 == 1; });
    return numEvens < 2 ? 1 : 0;
}

Not only is this code correct, it also reads well and demonstrates knowledge of the recent additions to the C++ language. And it comes from a relatively younger candidate, who made it as far as the in-person interview.

Again, software engineering is about much more than merely programming skills. This test is only the first filter; when the candidates are invited for the interview I ask them to explain their reasoning and their code to a non-programmer, to see how their communication skills stack up. Only then will we consider making them an offer.

Posted on October 16, 2014 at 9:53 am by David Lindelöf

Review: Growing Object-Oriented Software, Guided by Tests

Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce

I didn’t know what to expect when I picked up this book. In spite of its excellent reviews I feared it was going to be another redundant addition to the mountain of books harping on the virtues of Test-Driven Development (TDD), without adding anything significant to the standard sermon.

Nothing could be further from the truth.

I read a fair share of technical books, but this one is the only book in years that I immediately began to re-read after finishing. It is easily one of the most important books on software engineering out there, and is likely to remain so for some time to come.

The authors present what is now known as the London school of TDD, where the correctness of an object is defined by its interactions with its collaborators, not necessarily by its state. Although I had seen mocking frameworks in action before, never had I seen one being used throughout the development of a software project.

Another fascinating idea is the notion of writing an end-to-end test first, before even starting to write unit tests. We have been so thoroughly drilled on the virtues of fast tests that it no longer occurs to us that it is even possible, and sometimes preferable, to exercise the whole system, perhaps in a separate test suite.

But the best part of the book is the sample project used to illuminate these concepts. It consists of writing a desktop application with which a user can automate the process of bidding in online auctions. The graphical part is done with the Swing framework in Java, and the application talks to the auction house via XMPP. The first chapter of the case study is about setting up a literal end-to-end test, i.e. a test (written with JUnit) that verifies that the graphical display matches the XMPP communications.

From there on, the case study proceeds with the implementation of feature after feature, always following the same pattern: write the end-to-end test first, implement the feature with TDD, refactor, repeat.

No book is worth reading if it doesn’t change your approach to your existing projects. This one showed me immediately where our current project (an embedded system for energy management) was lacking in terms of testing.

Go read this book, and send me flowers and chocolates.

Posted on October 6, 2014 at 9:56 am by David Lindelöf · In: Book reviews

How to determine if a sample is drawn from a normal distribution

Suppose you’ve performed some experiment on a given population sample. Each experiment yields a single numeric result. You have also derived the usual statistics (say, the sample mean and the sample standard deviation). Now you want to draw inferences about the rest of the population. How do you do that?

I was surprised the other day to learn that there’s an ISO standard for that. However, life gets much simpler if you can assume that the parent population is normally distributed. There are several ways to check this assumption, and here we’ll cover what I believe are two of the easiest yet most powerful ones: first an informal, graphical one; then a formal, statistical one.

The graphical method is called a (normal) Q-Q plot. If your sample is normally distributed then the points in a normal Q-Q plot will fall on a line.

Here is a vector of measurements that I’ve been working with recently. (Never mind what these represent. Consider them as abstract data.)

> x
[1] 20.539154 -1.314532 4.096133 28.578643 36.497943 12.637312 6.783382 18.195836 15.464364 20.155207

The command to produce a normal Q-Q plot is included in R by default:

> qqnorm(x)
> qqline(x, col=2)

Note that I also call qqline() in order to draw a line through the 25% and 75% quantiles. This makes it easier to spot significant departures from normality. Here is the result:


[Image: normal Q-Q plot of x]

Not a nominee for the best linear fit ever, but nothing here to suggest non-normality either.

Now for the statistical test. There are many statistical tests for non-normality out there, but according to Wikipedia the Shapiro-Wilk test has the highest power, i.e. the highest probability of detecting non-normality in non-normally-distributed data. (I hope I’m getting this right or my statistician friends will tan my hide.)

This test is built into R as the shapiro.test() function:

> shapiro.test(x)

    Shapiro-Wilk normality test
data: x 
W = 0.9817, p-value = 0.9736

You probably have a part of your brain trained to release endorphins when it sees a p-value lower than 0.05, and to trigger a small depression when it is higher than 0.9. But remember what it is we are testing for here. What is the null hypothesis?

Here, the null hypothesis is that the data is normally distributed. You might find this counter-intuitive; for years, you have been trained to think of the null hypothesis as the thing you usually don’t want to be true. But here it is the other way around: we want to confirm that the data is normally distributed, so we apply tests that detect non-normality and hope the resulting p-value will be high. Here, any p-value lower than, say, 0.05 will ruin your day.
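A quick sanity check (a sketch; the exact W and p-value depend on the random seed) is to feed the test something deliberately non-normal and watch the p-value collapse:

> set.seed(1)
> shapiro.test(rexp(100))  # exponential data: expect a p-value far below 0.05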

So we have determined, both graphically and numerically, that there is no evidence of non-normality in our data. We can therefore state that, to the best of our knowledge, there is no evidence that the data comes from anything other than a normal distribution.

(Ever noticed how much statisticians love double-negatives?)

Posted on September 25, 2014 at 9:35 am by David Lindelöf

MATLAB Coding Conventions

Over the course of four years we have developed, at Neurobat, a set of coding conventions for MATLAB that I would like to share here. The goal of these conventions is three-fold.

Feel free to redistribute and/or adapt these rules to suit your organization.


We have observed that scientists and engineers who use MATLAB tend to write MATLAB code that mirrors their way of thinking: long scripts that perform computations as a series of steps.

Our experience has shown that code written in that style tends to become hard to understand and modify; it also tends to be hard to port to C. As an alternative, we suggest that both MATLAB and C programs will benefit from the application of the so-called Opaque Data Type programming idiom. We have found that a disciplined application of this idiom leads to more modular, cleaner code that is also easier to port to C.

In the rest of this article we enumerate the rules that should be followed to apply this idiom to the MATLAB language.

Represent an object with state as a struct

Neither C nor MATLAB has satisfying support for object-oriented programming; however, some degree of encapsulation can be achieved with structs, which both languages support.

We have found structs to be the best way to represent state in MATLAB. The alternatives, global variables and persistent variables (which are effectively global variables too), cannot be used to represent state held by more than one object.

Provide a meaningful name to the structure

The state-holding structure should represent some kind of object in the real world; provide a name for this structure, so we can understand the purpose of this object.

Represent a module by a folder 

Keep all the code related to a particular data structure (constructor, methods and unit tests) in a single folder bearing the same name as the structure. The C language lets you implement all functions in the same file, usually called a module; MATLAB, on the other hand, requires each (public) function to be defined as the first function in its own .m file. Keep all those .m files in the same folder, as sketched below.
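For example, the pid module shown at the end of this post would be laid out like this (the test file name is hypothetical):

pid/
    pid_new.m
    pid_new_value.m
    pid_control.m
    pid_test.m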

Never expect the client code to access fields directly

No code, except the methods defined in the enclosing folder, is expected (or allowed) to access the fields of the structure directly.

Define a constructor

Never expect the client code to build the struct itself; always provide a suitable function, called a constructor, that will instantiate the proper fields in the structure. The client code should never even be aware that it is dealing with a structure.

Keep a consistent naming convention for functions

C has no namespaces, and neither does MATLAB. It is therefore important to adhere to a naming convention for functions. Keep the following convention, where xxx is the name of the enclosing folder:

Constructor: xxx_new(...)

Methods: xxx_method_name(xxx, ...)

Destructor (if needed): xxx_free(xxx)

Methods, including the constructor, may accept optional arguments. The first argument to all methods should be an instance of xxx, on which it is understood that the operations will apply.

Keep the Command-Query Separation principle

The Command-Query Separation principle states that a method should either return a computed value or update the state of the object, but not both. Keep to this principle unless doing so would obviously lead to less readable and less maintainable code.
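In the PI controller example at the end of this post, pid_new_value is the command (it updates the controller’s state; since MATLAB passes structs by value, it must return the updated struct) and pid_control is the query (it computes the control signal without touching the state).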

Unit tests

We believe that the practice of Test-driven development leads to better software. We are however aware that applying this practice requires training and discipline. We therefore strongly encourage it for code provided by third parties, without (yet) requiring it. Internally developed code is almost always test-driven.

Code Quality

We understand that producing quality code requires experience, training and discipline. It would be unreasonable to expect the same code quality from scientists and engineers as from professional software craftsmen; however, we encourage you to remain alert to signs of deteriorating quality.


This is an example of how a simple PI controller could be implemented, following the guidelines above. Put the three files below under a pid folder, together with test data and test functions:

% pid_new.m
function pid = pid_new(setpoint, P, I)
pid.setpoint = setpoint;
pid.P = P;
if nargin < 3
  pid.I = 0;
else
  pid.I = I;
end
pid.error = 0;
pid.ui = 0;

% pid_new_value.m
function pid = pid_new_value(pid, new_value)
pid.error = pid.setpoint - new_value;
pid.ui = pid.ui + pid.error * pid.I;

% pid_control.m
function control = pid_control(pid)
control = pid.P * (pid.error + pid.ui);
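Client code then goes exclusively through these functions, never through the struct’s fields. A hypothetical session (the gains are made up):

pid = pid_new(21, 0.8, 0.1);     % setpoint 21, made-up P and I gains
pid = pid_new_value(pid, 19.5);  % command: feed in a new measurement
u = pid_control(pid);            % query: compute the control signal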
Posted on September 22, 2014 at 9:52 am by David Lindelöf