While working on a major refactor of QEMU’s softfloat code I’ve been doing a lot of benchmarking. It can be quite tedious work: you need to be careful you’ve run the correct steps on the correct binaries, and keeping notes is important. It is a task that cries out for scripting, but that in itself can be a compromise as you end up stitching a pipeline of commands together in something like perl. You may instead script it all in a language designed for this sort of thing, like R, but then find your final upload step is a pain to implement.
One solution to this is to use a literate programming workbook like this one. Literate programming is a style where you interleave your code with natural prose describing the steps you go through. This is different from simply having well-commented code in a source tree. For one thing you do not have to leap around a large code base as everything you need is in the file you are reading, from top to bottom. There are many solutions out there, including various python-based examples. Of course, being a happy Emacs user, I use one of its stand-out features, org-mode, which comes with multi-language org-babel support. This allows me to document my benchmarking while scripting up the steps in a variety of “languages” depending on my needs at the time. Let’s take a look at the first section:
1 Binaries To Test
Here we have several tables of binaries to test. We refer to the
current benchmarking set from the next stage, Run Benchmark.
For a final test we might compare the system QEMU with a reference
build as well as our current build.
| Binary                                                                       | title            |
|------------------------------------------------------------------------------|------------------|
| /usr/bin/qemu-aarch64                                                        | system-2.5.log   |
| ~/lsrc/qemu/qemu-builddirs/arm-targets.build/aarch64-linux-user/qemu-aarch64 | master.log       |
| ~/lsrc/qemu/qemu.git/aarch64-linux-user/qemu-aarch64                         | softfloat-v4.log |
Well that is certainly fairly self-explanatory. These are named org-mode tables which can be referred to from other code snippets and passed in as variables. So the next job is to run the benchmark itself:
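To make the mechanism concrete: org-babel hands a named table to a block declared with a `:var` header argument as a plain list of rows. This sketch assumes a hypothetical table name `binaries`; the row values echo the table above.

```python
# A named table referenced in a block header such as
#   #+begin_src python :var files=binaries
# arrives inside the block as a plain list of rows. The table
# name "binaries" here is illustrative.
files = [
    ["/usr/bin/qemu-aarch64", "system-2.5.log"],
    ["~/lsrc/qemu/qemu.git/aarch64-linux-user/qemu-aarch64", "softfloat-v4.log"],
]

# Each row unpacks straight into a binary and its log name.
for qemu, logname in files:
    print(qemu, "->", logname)
```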
2 Run Benchmark
This runs the benchmark against each binary we have selected above.

```python
import subprocess
import os

runs = []
for qemu, logname in files:
    cmd = "taskset -c 0 %s ./vector-benchmark -n %s | tee %s" % (qemu, tests, logname)
    subprocess.call(cmd, shell=True)
    runs.append(logname)
return runs
```
So why use python as the test runner? Well truth is whenever I end up munging arrays in shell script I forget the syntax and end up jumping through all sorts of hoops. Easier just to have some simple python. I use python again later to read the data back into an org-table so I can pass it to the next step, graphing:

```gnuplot
set title "Vector Benchmark Results (lower is better)"
set style data histograms
set style fill solid 1.0 border lt -1
set xtics rotate by 90 right
set yrange [:]
set xlabel noenhanced
set ylabel "nsecs/Kop" noenhanced
set xtics noenhanced
set ytics noenhanced
set boxwidth 1
set xtics format ""
set xtics scale 0
set grid ytics
set term pngcairo size 1200,500
plot for [i=2:5] data using i:xtic(1) title columnhead
```
This is a GNU Plot script which takes the data and plots an image from it. org-mode takes care of the details of marshalling the table data into GNU Plot so all this script is really concerned with is setting styles and titles. The language is capable of some fairly advanced stuff but I could always pre-process the data with something else if I needed to.
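The step of reading the logs back into an org-table can be sketched as below. The per-line log format (`<test-name> <nsecs/Kop>`) is an assumption for illustration; the real output of vector-benchmark may differ.

```python
import re

def logs_to_table(lognames):
    """Collect benchmark logs into a list of lists that org-babel
    renders as a table: a header row, then one row per test with a
    column per run.

    Assumes (hypothetically) one "<test-name> <nsecs-per-kop>" pair
    per log line.
    """
    results = {}
    order = []
    for log in lognames:
        with open(log) as f:
            for line in f:
                m = re.match(r"(\S+)\s+([0-9.]+)", line)
                if m:
                    name = m.group(1)
                    if name not in results:
                        results[name] = []
                        order.append(name)
                    results[name].append(float(m.group(2)))
    table = [["test"] + lognames]  # header: one column per log file
    for name in order:
        table.append([name] + results[name])
    return table
```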
Finally I need to upload my graph to an image hosting service to share with my colleagues. This can be done with an elaborate curl command but I have another trick at my disposal thanks to the excellent restclient-mode. This mode is actually designed for interactive debugging of REST APIs but it is also easy to use from an org-mode source block. So the whole thing looks like an HTTP session:

```restclient
:client_id = feedbeef
# Upload images to imgur
POST https://api.imgur.com/3/image
Authorization: Client-ID :client_id
Content-type: image/png
< benchmark.png
```
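For comparison, the “elaborate curl command” route can be sketched with Python’s standard library alone. The `feedbeef` client id is the same placeholder used in the restclient block, not a real credential.

```python
import urllib.request

def build_imgur_request(path, client_id="feedbeef"):
    """Build the POST request equivalent to the restclient session.

    The client id here is a placeholder, not a real credential.
    """
    with open(path, "rb") as f:
        data = f.read()
    return urllib.request.Request(
        "https://api.imgur.com/3/image",
        data=data,  # a Request carrying data defaults to POST
        headers={"Authorization": "Client-ID " + client_id,
                 "Content-Type": "image/png"},
    )

def upload_to_imgur(path):
    # Send the request and return imgur's JSON response body,
    # which contains the link we are after.
    with urllib.request.urlopen(build_imgur_request(path)) as resp:
        return resp.read().decode()
```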
Because the above dumps all the headers when run (which is very handy for debugging) I actually only want the URL in most cases. I can do this simply enough in elisp:

```
#+name: post-to-imgur
#+begin_src emacs-lisp :var json-string=upload-to-imgur()
  (when (string-match
         (rx "link" (one-or-more (any "\":" whitespace))
             (group (one-or-more (not (any "\"")))))
         json-string)
    (match-string 1 json-string))
#+end_src
```
The :var line calls the restclient-mode function automatically and passes in its result, from which the elisp can then extract the final URL.
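For anyone following along outside Emacs, the same link extraction can be sketched in Python. The regex mirrors the rx form above rather than doing a full JSON parse; this is a quick-and-dirty match, not a robust decoder.

```python
import re

def extract_link(json_string):
    # Grab the value following "link" in imgur's JSON response,
    # mirroring the elisp rx form: "link", then any run of quotes,
    # colons and whitespace, then everything up to the next quote.
    m = re.search(r'link["\s:]+([^"]+)', json_string)
    return m.group(1) if m else None
```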
And there you have it: my entire benchmarking workflow documented in a single file which I can read through, tweaking each step as I go. This isn’t the first time I’ve done this sort of thing. As I use org-mode extensively as a logbook to keep track of my upstream work I’ve slowly grown a series of scripts for common tasks. For example every patch series and pull request I post is done via org. I keep the whole thing in a git repository, so each time I finish a sequence I can commit the results as a permanent record of what steps I ran.
If you want even more inspiration I suggest you look at John Kitchin’s scimax work. As a publishing scientist he makes extensive use of org-mode when writing his papers. He is able to keep the main prose together with the code that plots the graphs and tables in a single source document, from which his camera-ready documents are generated. Should he ever need to reproduce any work his exact steps are all there in the source document. Yet another example of why org-mode is awesome 😉
I’ve just returned from a weekend in Brussels for my first ever FOSDEM – the Free and Open Source Software Developers’ European Meeting. It’s been on my list of conferences to go to for some time and, thanks to getting my talk accepted, my employer financed the cost of travel and hotels. Thanks to the support of the Université libre de Bruxelles (ULB) the event itself is free and run entirely by volunteers. As you can expect from the name they also have a strong commitment to free and open source software.
The first thing that struck me about the conference is how wide-ranging it was. There were talks on everything from the internals of debugging tools to developing public policy. When I first loaded up their excellent companion app (naturally via the F-Droid repository) I was somewhat overwhelmed by the choice. As it is a free conference there is no limit on the numbers who can attend, which means you are not always guaranteed to get into every talk. In fact during the event I walked past many long queues for the more popular talks. In the end I just bookmarked all the talks I was interested in and decided which one to go to depending on how I felt at the time. Fortunately FOSDEM have a strong archiving policy and video most of their talks, so I’ll be spending the next few weeks catching up on the ones I missed.
There now follows a non-exhaustive list of the most interesting ones I was able to see live:
Dashamir’s talk on EasyGPG dealt with the opinionated decisions it makes to try and make the use of GnuPG more intuitive to those not versed in the full gory details of public key cryptography. Although I use GPG mainly for signing git pull requests I really should make better use of it overall. The split-key solution to backups was particularly interesting. I suspect I’ll need a little convincing before I put part of my key in the cloud but I’ll certainly check out his scripts.
Liam’s A Circuit Less Travelled was an entertaining tour of some of the technologies and ideas from early computer history that got abandoned by the wayside. These ideas were often re-invented later in inferior forms as engineers realised the error of their ways as technology advanced. The latter half of the talk turns into a bit of a LISP love-fest, but as an Emacs user with an ever-growing config file that is fine by me 😉
Following on in the history vein was Steven Goodwin’s talk on Digital Archaeology, which was a salutary reminder of the amount of recent history that is getting lost as computing’s breakneck pace has discarded old physical formats in favour of newer, equally short-lived formats. It reminded me I should really do something about the 3 boxes of floppy disks I have under my desk. I also need to schedule a visit to the Computer History Museum with my children, seeing as it is more or less on my doorstep.
There was a tongue-in-cheek preview that described the EDSAC talk as recreating “an ancient computer without any of the things that made it interesting”. This was a little unkind. Although the project re-implemented the computation parts in a tiny little FPGA, the core idea was to introduce potential students to the physicality of the early computers. After an introduction to the hoary architecture of the original EDSAC and the Wheeler Jump, Mary introduced the hardware they re-imagined for the project. The first was an optical reader developed to read in paper tapes, although this time ones printed on thermal receipt paper. This included an in-depth review of the problems of smoothing out analogue inputs to get reliable signals from their optical sensors, which mirrors the problems the rebuild is facing with the nature of the valves used in EDSAC. It is a shame they couldn’t come up with some way to involve a valve, but I guess high-tension supplies and school kids don’t mix well. However they did come up with a way of re-creating the original acoustic mercury delay lines, this time with a tube of air and some 3D-printed parabolic ends.
The big geek event was the much anticipated announcement of RISC-V hardware during the RISC-V enablement talk. It seemed to be an open secret the announcement was coming but it still garnered hearty applause when it finally came. I should point out I’m indirectly employed by companies with an interest in a competing architecture but it is still good to see other stuff out there. The board is fairly open but there are still some peripheral IPs which were closed which shows just how tricky getting to fully-free hardware is going to be. As I understand the RISC-V’s licensing model the ISA is open (unlike for example an ARM Architecture License) but individual companies can still have closed implementations which they license to be manufactured which is how I assume SiFive funds development. The actual CPU implementation is still very much a black box you have to take on trust.
Finally, my talk is already online for those interested in what I’m currently working on. The slides have been slightly cropped in the video but if you follow the link to the HTML version you can read along on your machine.
I have to say FOSDEM’s setup is pretty impressive. Although there was a volunteer in each room to deal with fire safety and replace microphones, all the recording is fully automated. There are rather fancy hand-crafted wooden boxes in each room which take the feed from your laptop and mux it with the camera. I got the email from the automated system asking me to review a preview of my talk about half an hour after I gave it. It took a little longer for the final product to get encoded and online but it’s certainly the nicest system I’ve come across so far.
All in all I can heartily recommend FOSDEM for anyone with an interest in FLOSS. It’s a packed schedule and there is going to be something for everyone there. Big thanks to all the volunteers and organisers and I hope I can make it next year 😉
* Now builds for Firefox using WebExtension hooks
* Use chrome.notifications instead of webkitNotifications
* Use with style instead of inline for edit button
* fake “input” event to stop active page components overwriting text area
* avoid calling make-frame-on-display for TTY setups (#103/#132/#133)
* restore edit-server-default-major-mode if auto-mode lookup fails
* delete window when done editing with no new frame
Get the latest from the Chrome Webstore.
A couple of weeks ago I mused that I should really collect together the various hacks I use to integrate checkpatch into my workflow into a consistent mode. Having a quick look around I couldn’t find any other implementations, so I set about creating said mode. It turns out I’d created the directory and done the initial commit 3 years ago. Anyway I polished it up a bit and you can now get it here. I hope it’s useful to the wider community and, as ever, patches welcome 😉
Since I started working on aarch64 support for QEMU the most frequently asked question I got was “when can I run aarch64 system emulation on QEMU?”. Well wait no more, as support for a virtio-based aarch64 board was recently merged into the master branch of QEMU. In this post I’ll talk about building QEMU, a rootfs and a kernel that will allow you to start experimenting with the architecture.
I’m fairly atypical as a Linaro employee because I have a desk in a shared office. This means I get to participate in technical banter with the other employees based in the Cambridge office. However it has become quite clear I’m surrounded on all sides by VIMers, with only one potential convert who wants to try Emacs out “one day”. As a result I thought it might be nice to have an Emacs Birds of a Feather (BoF) session at LCU14.
BoF sessions are basically informal gatherings where people with a shared interest come along to swap tips and stories about their area of interest. In the context of LCU it would be an opportunity to network and meet fellow Emacsers. So, any interest?
I’ve recently started a new job at Linaro which has been keeping me very busy. My role combines the low-level fun of Dynamic Binary Translation that I so enjoyed at Transitive and the fact Linaro is a fully Open Source company. I work directly on the upstream projects (in this case QEMU) and with the project community. In many ways it’s my ideal job.
Of course I’ve quickly built up a reputation as the Emacs guy in the office surrounded by a sea of VIMers although one of the guys does profess a desire to learn Emacs “one day”.
One of the first things I did was move all my email into Emacs. I’d long been dissatisfied with the state of Thunderbird (and previously Evolution), so I took the opportunity to migrate to mu4e. I spend my days slinging patches and going through mailing lists so I really appreciate being in my editor while I do that.
I have also started streamlining some of my work-flow with a few specialised extensions to Emacs. I think I’m finally comfortable enough with elisp to have a pretty good stab at solving most problems. I’m also appreciating the ability to leverage the mass of Emacs code under the hood to make incremental tweaks rather than solve everything at once. I thought I might give you a tour of the code I’ve written so far.
First up is risu-mode. RISU is the code testing tool we’ve been using to verify our aarch64 ARM work in QEMU. The .risu file is a template that’s used to specify instruction patterns that the tool then uses to generate random code sequences. risu-mode is really just a bunch of regular expressions wrapped in the mode machinery that highlights the elements on the page. It doesn’t sound like much, but when you are working through a bunch of patterns looking for bugs it’s easier on the eye when the different elements are coloured.
Next thing I wrote was my own QEMU mode which is a simple comint-mode based mode for launching QEMU system emulation. It’s still very rough and ready as I’m mostly working on user emulation but I suspect it will be handy once I start on system emulation stuff.
Finally there is lava-mode. LAVA is Linaro’s automated test and validation framework. Although it already provides command line and web interfaces I thought it would be nice to launch and track test jobs from within Emacs itself. The job control files are JSON based so I built on the existing json-mode and a slightly patched version of xml-rpc.el to add job submission. I’ve started a simple job tracking mode that uses the tabulated-list-mode framework and eventually I’ll link it into the tracking library so job completion will be as seamless as my IRC work-flow.
So there you have it, a bunch of code that may well not interest anyone else but shows how Emacs provides a very rich base of functionality on which to build up tools that are useful to you.
Does anyone else want to share an example of their esoteric extensions? What’s the most obscure thing you’ve built on top of our favourite text editor?
I’ve just pushed the latest version of Edit with Emacs to the Chrome App Store. Hopefully most people are already tracking the latest edit-server.el via MELPA but this does introduce a few minor fixes to the extension itself. A new piece of functionality is the ability to trigger bringing Emacs to the foreground from a key-stroke within Chrome. I added this to support running Emacs on ChromeOS which together with my chromebooks.el package gives me a rather nice development environment without having to dump ChromeOS.
So, new for v1.13:
* Change the handling of hidden elements (fix bug #78)
* Add debugging for erroneous hidden text areas (#93)
* Add keyboard shortcut to bring Emacs to foreground
* Pass clipboard contents to foreground request
* add advice to save-buffers-kill-emacs to avoid prompting on shutdown
* add autoload cookies
* fix bug with format chars in url (#80)
* don’t call kill buffer hooks twice (#92)
* don’t set-buffer-multibyte on process buffer
* support the “foreground” request with optional clipboard contents
Get the latest from the Chrome Webstore.
We’re about halfway through our family holiday to the remote ends of the earth. It has been the first time we’ve taken Ursula on a plane so we thought we’d make it a big journey while we are at it.
To her credit she was mostly fine with the 24 hours on a plane required to get to the other side of the world. Most of the tears were during take-off and landing when it was hard to explain pressurisation to a 16 month old child. There were a few other snatches of complaint due to tiredness but otherwise it went well. It helps that she is a very cute child who instantly won over the cabin crew who were keen to help keeping her amused. She even had a freshly prepared meal of Salmon Fried Rice cooked for her by the First Class cabin crew.
Before we left the UK I had left Ursula playing in the kitchen while I sorted something out in the living room. When I came back into the suspiciously quiet kitchen I found the following example of toddler OCD:
An interesting aspect of having a psychologist for a mother-in-law is the wonderful insight she gives me into how the mind works. It had only been a few weeks earlier that we had been talking about a common behaviour that often precedes a spurt in language development. It seems as children start getting their heads around the concept of things belonging to categories they will start sorting their toys (or anything else) into organised piles. Obviously an understanding of the fact things can exist in categories is a prerequisite for understanding a lot of things about language.
I make the distinction between language and speech because the two are very different skills. Language is primarily a cognitive ability to map communicated ideas to abstract concepts. Speech is the vocalisation of that communication and involves fairly precise control of a dizzying array of muscles in our mouth and vocal cords. The mastery of this physical skill takes a lot longer, so often the distinction between words is only recognisable to parents and others who spend a lot of time with the child.
Ursula has been understanding basic instructions for some time now and it’s now possible to send her off to fetch or carry things with a reasonable degree of success. Combined with her deictic pointing there has been genuine two-way communication for some time. However, perhaps due to fluke or stimulated by the new environment she’s in on holiday, we are starting to see an explosion in words. She’s had the basic “Dadadada” and “Momamama” for some time, although it’s hard to distinguish them from the baby babbling she’s been doing for a long time. We have long joked about her generic use of “dat/cat” for the cat and then pretty much any other object she was pointing at. Just before we left she had started associating “NaNa” with bananas (a favourite food of hers). We now have distinct sounds for birds, cats, dogs and my favourite, “papa”, for the Nexus 7 which we call the PadPad so as to avoid confusing it with the Apple brand product 😉
My mum found my description of all this behaviour very amusing as I swing between proud Dad and scientific curiosity. I will put on the record that I’m not treating my daughter as a lab experiment but I do find the whole development of language and mind fascinating. I understand now why watching your kids grow and develop is so often cited by parents as one of the main joys of parenthood.
You can probably tell the sort of on-line company I keep from the deluge of noise on the social networks regarding Google’s decision to shut down Reader. However we shouldn’t be that surprised. In fact some companies that source content from Reader have anticipated the need to collect content themselves.
I of course will have to make a decision at some point. However I’ll not do it today like a lot of Reader users have. The rush to try out alternatives has overwhelmed some open source based projects which were quietly growing organically. I don’t envy those that have to suddenly gear up their back-end systems because an Internet behemoth gave us 2593 hours notice to sort out a replacement.
I’m mulling over the difference between self-hosting and having someone else do it. I’m not overly worried about going for convenience if I know I can get my data back if I need to. In fact the knowledge that you can theoretically self-host might be enough. To be fair to Google their Data Liberation team made exporting all my Reader data easy.
Before I make a choice I need to decide what my priorities are. Currently I subscribe to 250+ RSS feeds. Obviously I don’t read every single post but I make extensive use of tags to quickly process through stuff I do need to see when I need to see it. Aside from news, blog posts, funny cat pictures I also subscribe to other data feeds like bug trackers, code repositories, and other data sources. I of course want access to all of this data at any point on one of a number of devices. This makes a web hosted solution pretty much a must. There is no point having the data on my desktop when I’m somewhere else. From my point of view I want it to be open source compatible because if the company hosting now decides it no longer wants to I’ll only have to move the data and not break my work-flow.
It would also be very useful if it had a public API so others can interact with the data. I don’t need the solution to be all provided by one company. It’s perfectly fine to have multiple 3rd parties sorting out the Android integration. I might even look to doing something to integrate it with my favourite editor (the name of which even my non-geek readers probably know by now). So far my experiment with moving all of IRC and IM into Emacs seems to be working well and should be a subject of another post.
Are you a Reader user? What are your criteria for its eventual replacement? Is RSS just a dying protocol or is the need to aggregate and sift through data becoming more important?
There may well be a much better way of solving this problem around the corner. I certainly am open to persuasion. But don’t take away my current preferred solution until I’m convinced I’m ready to switch 😉