Workbooks for Benchmarking

Posted on Wed 21 February 2018 by alex in general

While working on a major refactor of QEMU's softfloat code I've been doing a lot of benchmarking. It can be quite tedious work: you need to be careful you've run the correct steps on the correct binaries, and keeping notes is important. It is a task that cries out for scripting, but that in itself can be a compromise, as you end up stitching a pipeline of commands together in something like Perl. You could script it all in a language designed for this sort of thing, like R, only to find the final upload step is a pain to implement.

One solution to this is to use a literate programming workbook like the one described here. Literate programming is a style where you interleave your code with natural prose describing the steps you go through. This is different from simply having well-commented code in a source tree. For one thing you do not have to leap around a large code base: everything you need is in the file you are reading, from top to bottom. There are many solutions out there, including various Python-based examples. Of course, being a happy Emacs user, I use one of its stand-out features, org-mode, which comes with multi-language org-babel support. This allows me to document my benchmarking while scripting up the steps in a variety of "languages" depending on my needs at the time.
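For anyone who hasn't seen org-babel before, the building block is a source region with a language tag which Emacs can execute in place. This fragment is purely illustrative (the name and command are mine, not from the workbook):

#+name: show-host
#+begin_src sh :results output
  # any shell command works; its output becomes the block's result
  uname -a
#+end_src

Let's take a look at the first section of the workbook: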

1 Binaries To Test

Here we have several tables of binaries to test. The currently selected
benchmarking set is referred to from the next stage, Run Benchmark.
For a final test we might compare the system QEMU with a reference
build as well as our current build.
| Binary                                                                       | title            |
|------------------------------------------------------------------------------+------------------|
| /usr/bin/qemu-aarch64                                                        | system-2.5.log   |
| ~/lsrc/qemu/qemu-builddirs/arm-targets.build/aarch64-linux-user/qemu-aarch64 | master.log       |
| ~/lsrc/qemu/qemu.git/aarch64-linux-user/qemu-aarch64                         | softfloat-v4.log |

Well that is fairly self-explanatory. These are named org-mode tables which can be referred to in other code snippets and passed in as variables.
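To make the mechanics concrete — the table and block names here are my own illustrative ones, not necessarily the workbook's — a named table and a block consuming it via a :var header look something like this:

#+name: binaries
| Binary                | title          |
|-----------------------+----------------|
| /usr/bin/qemu-aarch64 | system-2.5.log |

#+begin_src python :var files=binaries
  # org-babel hands the table over as a list of rows,
  # so each row is a [binary, logname] pair
  return [row[0] for row in files]
#+end_src

So the next job is to run the benchmark itself: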

2 Run Benchmark

This runs the benchmark against each binary we have selected above.

import subprocess

# files and tests arrive from org-mode via :var headers:
# files is the table of binaries above, tests the list of benchmarks
runs=[]

for qemu, logname in files:
    cmd = "taskset -c 0 %s ./vector-benchmark -n %s | tee %s" % (qemu, tests, logname)
    subprocess.call(cmd, shell=True)
    runs.append(logname)

# org-babel wraps the block in a function, so a top-level return
# hands the list of log files back as the block's result
return runs

So why use Python as the test runner? The truth is whenever I end up munging arrays in shell script I forget the syntax and jump through all sorts of hoops, so it is easier just to write some simple Python. I use Python again later to read the data back into an org-table so I can pass it to the next step.
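That read-back block isn't reproduced in this post, but a minimal sketch of the idea might look like this — the block name and the log format (one "test-name: value" pair per line) are assumptions on my part:

#+name: read-results
#+begin_src python :var runs=run-benchmark
  # Collate the per-binary logs into one table: a header row of
  # log names followed by one row of numbers per test.
  table = [["test"] + runs]
  results = {}
  for log in runs:
      with open(log) as f:
          for line in f:
              name, _, value = line.partition(":")
              results.setdefault(name.strip(), []).append(value.strip())
  for name, values in results.items():
      table.append([name] + values)
  return table
#+end_src

With the numbers back in an org-table, the next step is graphing: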

set title "Vector Benchmark Results (lower is better)"
set style data histograms
set style fill solid 1.0 border lt -1

# rotate the per-test labels and keep all text unenhanced
# (test names and paths may contain underscores)
set xtics rotate by 90 right
set yrange [:]
set xlabel noenhanced
set ylabel "nsecs/Kop" noenhanced
set xtics noenhanced
set ytics noenhanced
set boxwidth 1
set xtics format ""
set xtics scale 0
set grid ytics
set term pngcairo size 1200,500

# one bar per binary (columns 2-5), labelled by test name (column 1)
plot for [i=2:5] data using i:xtic(1) title columnhead

This is a gnuplot script which takes the data and plots an image from it. org-mode takes care of the details of marshalling the table data into gnuplot, so all this script is really concerned with is setting styles and titles. The language is capable of some fairly advanced stuff, but I could always pre-process the data with something else if I needed to.
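For completeness, the marshalling is driven by the block's header arguments; my sketch of the shape (with the table name assumed) would be:

#+begin_src gnuplot :var data=read-results :file benchmark.png
  # org-babel writes the table to a temporary file and substitutes
  # its filename wherever "data" appears in the script
  # ... the plot script shown above ...
#+end_src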

Finally I need to upload my graph to an image hosting service to share with my colleagues. This could be done with an elaborate curl command, but I have another trick at my disposal thanks to the excellent restclient-mode. This mode is actually designed for interactive debugging of REST APIs, but it is also easy to use from an org-mode source block. So the whole thing looks like an HTTP session:

:client_id = feedbeef

# Upload images to imgur
POST https://api.imgur.com/3/image
Authorization: Client-ID :client_id
Content-type: image/png

< benchmark.png
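
One piece of plumbing the listing doesn't show: for the elisp below to call it by name, the request needs to live in a named org block, presumably along these lines:

#+name: upload-to-imgur
#+begin_src restclient
  # ... the POST request shown above ...
#+end_src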

Because running the above dumps all the headers (which is very handy for debugging) and I usually only want the URL, I post-process the result. I can do this simply enough in elisp:

#+name: post-to-imgur
#+begin_src emacs-lisp :var json-string=upload-to-imgur()
  (when (string-match
         (rx "link" (one-or-more (any "\":" whitespace))
             (group (one-or-more (not (any "\"")))))
         json-string)
    (match-string 1 json-string))
#+end_src

The :var line automatically runs the upload-to-imgur restclient block and passes its result in as json-string, from which the elisp can then extract the final URL.
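Evaluating post-to-imgur with C-c C-c then leaves just the link in the buffer as a results block, something like this (URL entirely invented for illustration):

#+RESULTS: post-to-imgur
: https://i.imgur.com/xxxxxxx.png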

And there you have it: my entire benchmarking workflow documented in a single file which I can read through, tweaking each step as I go. This isn't the first time I've done this sort of thing. As I use org-mode extensively as a logbook to keep track of my upstream work, I've slowly grown a series of scripts for common tasks. For example, every patch series and pull request I post is done via org. I keep the whole thing in a git repository, so each time I finish a sequence I can commit the results as a permanent record of what steps I ran.

If you want even more inspiration I suggest you look at John Kitchin's scimax work. As a publishing scientist he makes extensive use of org-mode when writing his papers. He is able to combine the main prose with the code that plots the graphs and tables in a single source document, from which his camera-ready documents are generated. Should he ever need to reproduce any work, his exact steps are all there in the source document. Yet another example of why org-mode is awesome ;-)