Until last week I had never been to a C++ conference before. I’m rather glad to say that I’ve now experienced the wonder of having a firehose of C++ knowledge plugged into my brain and turned on.
Most of the best times at the conference were in between talks, where random meetings in the hallways over coffee would yield fascinating discussions. I was flattered to have a fair number of people spot my name badge and come up and thank me for Compiler Explorer – a very surreal experience. I got a tiny taste of what it must be to be “famous”! I also got a lot of advice and ideas on how to improve the site, and once the dust settles a little I look forward to getting stuck into improvements, like more Microsoft compilers (and a better compilation experience for those using it), and execution support.
While the hallway chance encounters (and lunches and dinners) yielded a lot of great conversations, the talks themselves were also packed with information. Below is a small taste of some of the talks that left an impression on me:
Today I launched Compiler Explorer on Patreon – a site where one can pledge ongoing donations to content creators.
It was a tougher decision than I expected. I spend a fair amount of cash and an awful lot of time on Compiler Explorer, but I’ve always seen it as a hobby. This now puts me a little closer to seeing it as a “job” of sorts. I hope this works out!
If you enjoy using Compiler Explorer and want to help out, please visit the new page on Patreon.
Today I updated Compiler Explorer to support better sharing, specifically to allow embedding a Compiler Explorer view into another site, useful for blog posts that wish to demonstrate how compilers generate code, or how language constructs actually become assembly.
For example, maybe you want to show off how well the compiler optimizes multiplying by a constant:
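A minimal example of my own (not necessarily the one embedded in the original post): compiled at -O2 for x86-64, this shows the multiply being strength-reduced.

```cpp
// Multiplying by a constant: at -O2 on x86-64, compilers typically
// replace the multiply with cheaper instructions. Here, x * 9 is
// usually emitted as a single "lea eax, [rdi + rdi*8]" instead of
// an imul.
int multiplyByNine(int x) {
    return x * 9;
}
```

Paste this into an embedded Compiler Explorer view and readers can see the generated assembly change as they edit the constant.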
I’ve been running Compiler Explorer for over four years now, and a lot has changed in that time. In addition to C++, it now supports Go, Rust and D. It scales up and down to support demand. It has many different compilers, many self-built.
I’ve been asked by a couple of people recently how everything works, and so I thought I’d put some notes down here, in case it should help anyone else considering something similar.
In brief: Compiler Explorer runs on some Amazon EC2 instances, behind a load-balancer. DNS routes to the load balancer, which then picks one of the instances to actually send the request to. In fairness, most of the time I only have one instance running, but during rolling updates (and high load) I can have two or so.
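The load balancer's role can be modelled as a simple round-robin pick over the currently healthy instances. This is a toy sketch of the idea, not how Amazon's load balancer is actually implemented, and the instance names are made up:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Toy round-robin balancer: each incoming request goes to the next
// instance in the list. With one instance running (the common case),
// every request lands on the same box.
class RoundRobinBalancer {
public:
    explicit RoundRobinBalancer(std::vector<std::string> instances)
        : instances_(std::move(instances)) {}

    const std::string &pick() {
        const std::string &chosen = instances_[next_];
        next_ = (next_ + 1) % instances_.size();
        return chosen;
    }

private:
    std::vector<std::string> instances_;
    std::size_t next_ = 0;
};
```

During a rolling update the pool briefly holds two instances, so requests alternate between the old and new versions until the old one is drained.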
Earlier this year I gave another presentation on jsbeeb at the GOTO Chicago conference. The good folks at GOTO have just uploaded the video to YouTube and you can watch it here:
After last time’s analysis of the Arrandale BTB, I thought I should take a look at more contemporary CPUs. At work I have access to Haswell and Ivy Bridge machines. Before I got too far into interpretation, I spent a while making it as easy as possible to remotely run tests and graph the results. The code has improved a little in this regard. For completeness, this article was written with the code at git SHA ab8cbd1d.
The Ivy Bridge I tested was an E5-2667 v2 and the Haswell was an E5-2697 v3.
First up, let’s try to see how many branches we can fit in the BTB:
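The approach, roughly, is to time a long chain of unconditional branches while varying the number of branches and the spacing between them: once the chain no longer fits in the BTB, every branch resteers the front end and the per-branch cost jumps. Here's a sketch of a generator for such a test kernel (it emits GNU assembler text; the label names and the `.org` padding scheme are my own, not necessarily what the real harness does):

```cpp
#include <sstream>
#include <string>

// Emit GNU-assembler text for a chain of unconditional branches:
// branch0 jumps to branch1, branch1 to branch2, and so on, with
// each branch placed 'spacing' bytes after the previous one via
// .org. Timing many iterations of this chain while sweeping the
// branch count and spacing exposes the BTB's capacity and indexing.
std::string makeBranchChain(int numBranches, int spacing) {
    std::ostringstream out;
    for (int i = 0; i < numBranches; ++i) {
        out << "branch" << i << ": jmp branch" << i + 1 << "\n";
        out << ".org " << (i + 1) * spacing << "\n";
    }
    out << "branch" << numBranches << ": ret\n";
    return out.str();
}
```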
Continuing on from my previous ramblings on the branch target buffer, I thought I’d do a quick follow-up with a little more investigation.
The next thing I looked into was how many bits of the address are used for the tag. My approach for this was as follows: set N=2 and use a very large D to place two different branches in the same set. Ordinarily we’d expect no resteers at all: the BTB is four-way, so our two branches fit with room to spare.
However, if only a subset of the address bits is used as the tag, then branches whose addresses differ only in bits outside the tag should cause resteers. The BTB erroneously treats the two branches as the same; the mistake is found and corrected at the decoder, but at the cost of a resteer.
This time I’m digging into the branch target buffer (BTB) on my Arrandale laptop (Core i5 M 520, model 37 stepping 5).
The branch target buffer hints to the front-end that a branch is coming, before the instructions have even been fetched and decoded. It caches the destination and some information about the branch – whether it’s conditional, for example. It’s thought to be a cache-like structure, and there are hints that it’s multi-level, like the memory caches. I wanted to find out how big the BTB is and how it’s organized.
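That description can be sketched as a small set-associative cache keyed by branch address. This is a toy model with made-up sizes (the number of sets and ways is exactly what the experiments aim to discover):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// Toy set-associative BTB model. The sizes are placeholders; the
// point of the experiments is to find the real values.
constexpr int kSets = 512;
constexpr int kWays = 4;

struct Entry {
    bool valid = false;
    uint64_t branchAddr = 0;  // full address stands in for the tag
    uint64_t target = 0;
};

class Btb {
public:
    // Lookup: a hit steers the front end to 'target' before the
    // branch instruction has even been decoded.
    std::optional<uint64_t> lookup(uint64_t addr) const {
        for (const Entry &e : sets_[setOf(addr)])
            if (e.valid && e.branchAddr == addr) return e.target;
        return std::nullopt;  // miss: front end falls through, resteers later
    }

    // Install a branch after it executes (crude round-robin eviction).
    void insert(uint64_t addr, uint64_t target) {
        Entry &victim = sets_[setOf(addr)][nextWay_ % kWays];
        ++nextWay_;
        victim = {true, addr, target};
    }

private:
    static std::size_t setOf(uint64_t addr) { return (addr >> 4) % kSets; }
    std::array<std::array<Entry, kWays>, kSets> sets_{};
    std::size_t nextWay_ = 0;
};
```

The experiments below probe each of these parameters in turn: capacity, the number of ways, and which address bits feed the set index.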