Today I updated Compiler Explorer to support better sharing: specifically, you can now embed a Compiler Explorer view into another site. That's useful for blog posts that want to demonstrate how compilers generate code, or how language constructs actually become assembly.
For example, maybe you want to show off how well the compiler optimizes multiplying by a constant:
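(The live embedded view doesn't survive in plain text, so here's the sort of snippet one might embed instead. The function name and the exact instructions are illustrative only: at -O2, GCC and Clang will typically strength-reduce the multiply rather than emit an `imul`.)

```cpp
// Multiplying by a constant: the compiler is free to strength-reduce this.
// At -O2, GCC and Clang typically emit something like
//   lea rax, [rdi + rdi*8]   ; x * 9 == x + 8*x
// instead of an imul instruction.
long multiplyByNine(long x) {
    return x * 9;
}
```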
I’ve been running Compiler Explorer for over four years now, and a lot has changed in that time. In addition to C++, it now supports Go, Rust and D. It scales up and down to support demand. It has many different compilers, many self-built.
I’ve been asked by a couple of people recently how everything works, and so I thought I’d put some notes down here, in case it should help anyone else considering something similar.
In brief: Compiler Explorer runs on some Amazon EC2 instances, behind a load-balancer. DNS routes to the load balancer, which then picks one of the instances to actually send the request to. In fairness, most of the time I only have one instance running, but during rolling updates (and high load) I can have two or so.
Earlier this year I gave another presentation on jsbeeb at the GOTO Chicago conference. The good folks at GOTO have just uploaded the video to YouTube and you can watch it here:
After last time’s analysis of the Arrandale BTB, I thought I should take a look at more contemporary CPUs. At work I have access to Haswell and Ivy Bridge machines. Before I got too far into interpretation, I spent a while making it as easy as possible to remotely run tests and graph the results. The code has improved a little in this regard. For completeness, this article was written with the code at SHA ab8cbd1d.
The Ivy Bridge I tested was an E5-2667 v2 and the Haswell was an E5-2697 v3.
First up let’s try and see how many branches we can fit in the BTB:
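(The graphs and the real harness aren't reproduced here. Purely as a sketch of the idea, and not the original code, something along these lines lays out a configurable chain of unconditional branches in an executable buffer and runs through it repeatedly; the branch count and spacing are placeholders to sweep over, and the performance-counter reads around the loop are left as a comment.)

```cpp
// A minimal sketch (not the original harness): lay out N unconditional
// branches, each 'stride' bytes apart, then run through the chain many
// times. With a counter measuring front-end resteers (e.g. BACLEARS)
// around the loop, the resteer rate jumps once N branches no longer fit
// in the BTB for that spacing.
#include <sys/mman.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

using TestFunc = void (*)();

// Build a chain of 'jmp' instructions spaced 'stride' bytes apart,
// terminated by a 'ret'.
TestFunc buildJumpChain(uint8_t *buf, size_t numBranches, size_t stride) {
    for (size_t i = 0; i < numBranches; ++i) {
        uint8_t *p = buf + i * stride;
        if (i + 1 == numBranches) {
            *p = 0xc3;                                    // ret
        } else {
            *p = 0xe9;                                    // jmp rel32
            int32_t rel = static_cast<int32_t>(stride) - 5; // relative to next insn
            std::memcpy(p + 1, &rel, sizeof(rel));
        }
    }
    return reinterpret_cast<TestFunc>(buf);
}

int main() {
    constexpr size_t kNumBranches = 4096;   // sweep this in the real test...
    constexpr size_t kStride = 16;          // ...and this
    size_t size = kNumBranches * kStride + 16;
    auto *buf = static_cast<uint8_t *>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE | PROT_EXEC,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    TestFunc run = buildJumpChain(buf, kNumBranches, kStride);
    // Surround this loop with performance-counter reads (BACLEARS,
    // branch misses, ...) to measure resteers per branch executed.
    for (int iter = 0; iter < 1000; ++iter)
        run();

    munmap(buf, size);
    return 0;
}
```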
Continuing on from my previous ramblings on the branch target buffer, I thought I’d do a quick follow-up with a little more investigation.
The next thing I looked into was how many bits of the address are used for the tag. My approach for this was as follows: set N=2 and use a very large D to place two different branches in the same set. Ordinarily we’d expect no resteers at all: the BTB is four-way so our two branches fit with room to spare.
However, if only a subset of the address bits is used as the tag, and the two branch addresses differ only in bits that aren’t part of the tag, then we should expect resteers: the BTB erroneously thinks the two branches are the same. The mistake is found and corrected at the decoder, but a resteer is caused.
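(To make that concrete, here's a toy model in C++. The bit ranges are made-up placeholders, not measurements; the real index and tag widths are exactly what the experiment is trying to determine.)

```cpp
// Hypothetical numbers only: suppose the BTB set index comes from address
// bits [5..14] and the tag from bits [15..22]. Two branches whose addresses
// agree in bits 0..22 but differ above bit 22 then land in the same set
// with the same tag, so the BTB treats them as the same branch and every
// switch between them costs a resteer.
#include <cstdint>
#include <cstdio>

constexpr unsigned kIndexLowBit = 5, kIndexBits = 10;  // assumed, not measured
constexpr unsigned kTagLowBit = 15, kTagBits = 8;      // assumed, not measured

uint64_t btbSet(uint64_t addr) {
    return (addr >> kIndexLowBit) & ((1u << kIndexBits) - 1);
}
uint64_t btbTag(uint64_t addr) {
    return (addr >> kTagLowBit) & ((1u << kTagBits) - 1);
}

int main() {
    uint64_t a = 0x400000;          // two branch addresses that differ only
    uint64_t b = a + (1ull << 23);  // above the assumed tag bits
    std::printf("same set: %d, same tag: %d\n",
                btbSet(a) == btbSet(b), btbTag(a) == btbTag(b));
    // prints "same set: 1, same tag: 1" - the model can't tell them apart
}
```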
This time I’m digging into the branch target buffer (BTB) on my Arrandale laptop (Core i5 M 520, model 37 stepping 5).
The branch target buffer hints to the front-end that a branch is coming, before the instructions have even been fetched and decoded. It caches the destination and some information about the branch – whether it’s conditional, for example. It’s thought to be a cache-like structure, and there are hints that it’s multi-level, like the memory caches. I wanted to find out how big the BTB is and how it is organized.
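(As a mental model only, with placeholder sizes and bit choices rather than anything measured, a cache-like BTB might be sketched like this.)

```cpp
// A toy model of a cache-like BTB, just to fix terminology: entries are
// grouped into sets, selected by low branch-address bits, and each set
// holds a few ways tagged by higher address bits. Sizes are placeholders.
#include <array>
#include <cstdint>
#include <optional>

struct BtbEntry {
    uint64_t tag = 0;
    uint64_t target = 0;       // predicted destination
    bool conditional = false;  // extra info about the branch kind
    bool valid = false;
};

template <size_t Sets, size_t Ways>
struct ToyBtb {
    std::array<std::array<BtbEntry, Ways>, Sets> entries{};

    // Look up a branch by its address: low bits pick a set, the remaining
    // bits form the tag matched against each way in that set.
    std::optional<uint64_t> predict(uint64_t branchAddr) const {
        size_t index = (branchAddr >> 4) % Sets;  // index bits: a guess
        uint64_t tag = (branchAddr >> 4) / Sets;  // everything above them
        for (const BtbEntry &e : entries[index])
            if (e.valid && e.tag == tag) return e.target;
        return std::nullopt;  // no hint: the front-end just fetches sequentially
    }
};

// e.g. ToyBtb<512, 4> models a 2048-entry, four-way BTB.
```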
I noticed it listed the top 20 most-played games… and I thought I’d do the same for jsbeeb.
So for the period 1st January 2015 to 16th February 2016, the top 20 disc images loaded from the “Stairway to Hell” archive are:
| Image | Number of loads | Percentage |
|-------|-----------------|------------|
I’m in the middle of an investigation of the branch predictor on newer Intel chips. Read the previous article to get some background.
Where I left off I’d just decided to look into static prediction myself. I’ve previously used Agner Fog’s tools to do these kinds of investigations, and so that was my first stop.
Agner’s tools install a kernel module to give user-mode access to the hardware performance monitoring counters inside the Intel chips. Each CPU has four counters that can be used to count one of a number of internal CPU events. Agner’s tools then run micro-benchmarks while counting the various internal things going on inside the processor. This seems perfect for us!
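(Agner's module and test programs handle all the counter programming for you. Purely to illustrate the underlying mechanism, and not Agner's API, here's a minimal Linux-only sketch that uses `perf_event_open` to count one hardware event around a hot loop with a hard-to-predict branch.)

```cpp
// Not Agner's API: a rough Linux-only sketch of the same idea, using
// perf_event_open to count one hardware event (branch misses) around a
// microbenchmark, analogous to what counter-based tools do.
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

static int openCounter(uint32_t type, uint64_t config) {
    perf_event_attr attr{};
    attr.size = sizeof(attr);
    attr.type = type;
    attr.config = config;
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;
    // Count this process, on whichever CPU it runs.
    return static_cast<int>(syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0));
}

// Some work with data-dependent branches for the counter to observe.
static volatile uint64_t sink;
static void benchmark() {
    uint64_t x = 12345, count = 0;
    for (int i = 0; i < 1000000; ++i) {
        x = x * 6364136223846793005ull + 1442695040888963407ull;  // cheap PRNG
        if (x & 0x80000000u) count++;  // hard-to-predict branch
    }
    sink = count;
}

int main() {
    int fd = openCounter(PERF_TYPE_HARDWARE, PERF_COUNT_HW_BRANCH_MISSES);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    benchmark();
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t misses = 0;
    read(fd, &misses, sizeof(misses));
    std::printf("branch misses: %llu\n",
                static_cast<unsigned long long>(misses));
    close(fd);
    return 0;
}
```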