I’ve recently updated the stylesheet of this website to include some mobile-first design principles. Now if I were a professional web developer, I’d have done this from the start. I am, however, just an interested amateur. C++ is my mother tongue, with Python following very closely behind it.
Fork
I started learning C++ when I was pretty young, about 11 or so. It was a strange time, thinking back: my friends and I would race around on our bikes after school, then we’d all go home for dinner, and after dinner I’d be programming. No idea what they’d do. Probably more normal things like video games or TV or something.
At that age, everything about programming was new and interesting. I think starting with something as complex as C++ ended up being an excellent decision I couldn’t have possibly fully understood at the time. To become proficient at C or C++, you need to understand what the compiler and computer are doing and what their delicate interplay is. With many languages, it’s common to say that the language itself does things for you under certain circumstances.
Take string concatenation in Java, for example. If you were to write String a = b + c + d; the compiler would convert it into successive applications of StringBuilder.append. Technically, yes, the compiler does this, but it does so to work around a limitation of the language. In C++, the programmer must define what this means; the compiler just synthesises code using that definition. If something stupid happens because of that, it’s your fault. C is—in some ways—even worse because it’s so much simpler: your program will do exactly what you’ve written, which might not be what you intended.
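To make that concrete, here’s a tiny sketch of what “the programmer must define what this means” looks like in practice (Path is just an invented type for illustration):

#include <iostream>
#include <string>

struct Path {
    std::string value;
};

// Without this function, a + b for two Paths simply wouldn't compile;
// the compiler only synthesises a call to whatever operator+ we define.
Path operator+(const Path &a, const Path &b)
{
    return Path{a.value + "/" + b.value};
}

int main()
{
    Path a{"usr"}, b{"local"};
    std::cout << (a + b).value << '\n'; // prints usr/local
}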
So with C and C++, the language doesn’t really do anything for you. It’s down
to the compiler and the standard library. I found this absolutely fascinating
when I was learning. Consider the main
function accepting arguments from the
operating system. When writing a Java program, the arguments are just there,
right? The language provides you with the arguments. With C and C++, the
arguments look like they’re just there, but the true entry-point of a C or C++
program (or any language that compiles to native) is just a location in memory.
The C standard library fetches the arguments from the operating system using
low-level API calls.
This detail is hidden in almost every other language, but C and C++ simply choose not to hide it from you. You can completely replace the standard library if you want; as long as you initialise a few registers (for both C and C++) and execute all the constructors for global objects (for C++), everything will work. Young me was absolutely amazed and fascinated by this because it felt “real.” Languages hosted by VMs (like Java and Python) didn’t impress me in the same way because they’re just programs. Very clever and well-implemented programs, of course, but because they’re just programs, you can do whatever you want with them. The fact that the C standard library quietly does real work to let you write something that feels like “just a program” made it “real.” Like I was peering behind the curtain. Like I was Neo when he could finally read the code.
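As a tiny demonstration of that last point, here’s a sketch where main never mentions the global object, yet the startup code constructs it before main runs and the shutdown code destroys it afterwards:

#include <iostream>

struct Tracer {
    Tracer() { std::cout << "constructed before main\n"; }
    ~Tracer() { std::cout << "destroyed after main\n"; }
};

// main never touches this object; the C++ startup machinery constructs it
// before main is entered and destroys it after main returns.
Tracer tracer;

int main()
{
    std::cout << "inside main\n";
}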
There’s No Choice: Take the Red Pill
What’s deeper than interacting with the operating system itself? That felt like a natural question. I understood that programs are just collections of data and instructions, but what did that really mean? What really is an operating system? If it invokes my programs and they ask it questions, where does it get its information from?
By the time I started asking these questions, I was leaving school. Obviously computers are made of electronics; everyone knows computers contain loads of transistors which are “just like tiny little switches.” So the path before me couldn’t have been clearer: study electronics. I chose not to study computer science or physics because my curiosity got the better of me. I needed to know how all this stuff works.
I absolutely do not regret that decision, but it certainly didn’t have the desired outcome. Studying electronics just led to more questions. How do transistors work? So transistors aren’t little switches at all, that’s just one mode of operation, what else can they do? So modern processors contain billions of transistors, how on Earth do you manage that ludicrous level of complexity? Why does algorithm X beat Y in wall-clock time when Y has a lower computational complexity?
I became interested in supercomputing, too. Regular computers are amazing enough with their non-linear instruction pipelines, hierarchical bus structures, and cache architectures keeping the fierce maw of the ALU fed. Supercomputers take all these ideas to the extreme. Now we’re using these sorts of techniques at the same level that I was used to playing with: the interaction between programs and the operating system.
I took a PhD position studying massively parallel computing. Our project, SpiNNaker, aimed to present neuroscientists with a platform capable of simulating massive neural structures in biological real-time. The design intent was for simulations comprising a billion neurons embedded in a fabric of a trillion synaptic connections to run as fast as the biological equivalent would. That’s 1% of a human brain (near enough) as fast as you can think.
This put me in a wonderfully interdisciplinary position: I worked with hardware, software, and even a bit of neuroscience. I used FPGAs to implement communication and IO infrastructure for this massive machine. I wrote system software and tooling to actually get programs onto this massive machine. I wrote example programs for it and worked out how software should be structured to execute on it. I developed a shallow understanding of how brains are structured, how they grow, what synapses really are, how neurons work (roughly), and some interesting models that computational neuroscientists use. Thinking back, it was an amazing experience.
There’s No End to the Rabbit Hole
I took my undergrad to understand what was beneath the operating system. I took my postgrad to understand this more deeply and to learn how to use computers effectively. In a way, both choices were successful. In another way, they were impressively unsuccessful.
The most important lesson I learned from my PhD wasn’t anything to do with software or computers or electronics. It was far broader than that: truly interesting problems are everywhere. I’ve never studied as a computer scientist, but I feel like my PhD moved me closer to being one. Just as it allowed me to skirt the edges of neuroscience. Just as my undergrad allowed me to skirt the edges of physics. Just as trying to learn computer science after the fact allowed me to skirt the edges of pure mathematics.
Losing Touch with an Old Friend
This journey was truly awesome; it had its bad parts, but every journey does. I’d always wanted to be a scientist. Scientists have PhDs and now I have one, therefore I’m a scientist at last! So now what?
This journey had an unexpected side-effect, one I completely shrugged off because my focus was elsewhere: I became somewhat of an expert user of both C++ and Python. I became somewhat of an expert at structuring systems and software. To support the journey, these skills were essential; without them I couldn’t have made any progress, so to me, all this stuff was just obvious.
I say this not to boast, but to provide context. These accidentally curated skills allowed me to get a job pretty easily. I decided that getting a job was the obvious course of action. So I chucked my CV out there for C++ jobs and was made an offer as I was driving home from the interview. Clearly I made a good impression, right?
I’ve been at my current employer for a number of years now. I’ve recently been promoted, in fact. Unfortunately, this means I don’t often write code any more, but I still talk about it a lot and help my colleagues get things done. But over the last few years, I’ve been feeling increasingly apathetic towards software. I started feeling like everything was unacceptably difficult.
Now C++ gets a bad rap from many people for being too complex, often being considered an “expert’s language.” I can’t really argue against that; C++ literally lets an engineer do anything on any platform in almost any style of programming. With such a rich feature-set, it’s only natural that it’ll be a gigantic and confusing language. I think this image is due to a lot of people trying to exercise their expertise by using as many features of the language as they possibly can, as often as they possibly can. You can write perfectly reasonable programs using a tiny fraction of the feature-set, and such programs can be surprisingly concise, readable, and scalable, and they’ll perform well right out of the box.
I’m obviously biased because I’m one of those people who really likes C++, but
even someone as fanboyish as me can recognise that actually building C++
programs can be a real exercise in frustration. Contrast this to Python:
typically you just create a virtualenv, pip install
a bunch of stuff, and
crack on with solving the problem at hand. The environment and even the
operating system rarely ever matter in this process. Just sit down, spend a few
minutes getting ready, then get to programming. C++, however, seldom starts
this smoothly. Getting your environment ready can often involve building a load
of stuff from source (which is usually especially painful on Windows) and
working out how to make use of a load of cool libraries on GitHub because you
can’t simply c++-package-manager install
them. For this reason, most people (myself included) consider learning Python a fun experience, one that makes programming fun again. I felt like that when I learned it all those years ago.
Learning C++ was fun in the beginning because battling through all those difficult build problems, collating libraries, building things from source, and whatever else are all just part of the learning process; at the time they feel like naturally occurring problems to solve. After a while, however, this process becomes immensely trying. I don’t want to spend that time setting up, I just want to code…
An Alternative Outlet
I’ve wanted a blog like this for literally years. My original intention was to blog throughout my PhD. I thought it would yield many interesting stories and experiences. It certainly didn’t fail to deliver, and I have plenty of drafts to show that. Unfortunately, though, I could never settle on a blogging platform. I tried Wordpress for a while, but I didn’t really get on with it.
Probably because it felt too high-level. As with learning C++ all those years ago, I wanted to understand the internet. I wanted the blog to be a project as well as a creative outlet. So I began the years-long tinkerthon that ultimately ended with this site finally going live.
JavaScript is Like C++ and Python…
…but not always in a good way. There are like four hundred trillion packages
on npm
. This is obviously awesome because a) there’s a package manager (like
Python) and b) there’s probably a library that solves most of the issues you
don’t care about, allowing you to focus on what you do (like C++ and Python).
However, it’s also extremely daunting because there are about a thousand ways to
do the same thing, each with their own merits and demerits. I found it
overwhelming at first.
Especially with so many ways to do front-end dev. I use Qt in C++ pretty frequently, which I find to be refreshingly easy to use for almost everything I care about (except for customising look’n’feel; stylesheets—whilst easy—only go so far…). I’ve also used WPF in C# and have dabbled with Xamarin. They all use forms of MVVM, so it’s not like I’m a front-end noob. However, none of the JavaScript front-end toolkits I played with “felt” right. I don’t know how to quantify that, either.
Then I discovered React and it made sense to me immediately. It felt
sufficiently similar to XAML/QML for me to just pick up the basics without
really thinking too much. Couple this with Browsersync (or whatever
create-react-app
uses) and you have a very productive development environment.
I can just save stuff and view the changes immediately. Even though I
understand (roughly) how it works, it’s still awesome to behold.
One day during the tinkerthon, sitting there tweaking SCSS (because it’s a better UX than straight CSS) and React components, a feeling washed over me. A familiar feeling, but one I hadn’t felt for a while. I was having fun. I could do something and see the changes immediately; the feedback loop is incredibly effective at driving me into a flow state because there’s so little friction. Working with C++ in my day job (with an old compiler containing known bugs and arcane restrictions) now felt tedious by comparison. Extremely tedious, in fact.
Rediscovering an Old Friend
I blamed C++ for this feeling. Perhaps all these voices shouting that C++ was an “experts’ language” were right all along? I looked over code that I, myself, had written and thought “there’s just so, so much of it, and what does it really do?” Does the effort really justify the outcome? I could have achieved similar things in orders of magnitude fewer lines of Python or JavaScript.
It felt so weird. I’d been a proponent of C++ for literally decades, insisting that its reputation isn’t fair on it, especially since the introduction of C++11. “You can do anything in C++!” I’d proudly said on many occasions, but now I found myself thinking “Sure, you can, but why would you?”
I played with React-and-friends more and more. It became more fun as I learned more. The ecosystem mostly just works, and because there are so many people using it, solving weird problems is usually just a Google search away. Often, those answers hint at other things to learn, so not only does it fix your problem, you’re also given the opportunity to discover a load of new things. It’s great!
Then, out of nowhere, an opportunity arose during my day-job that seemed utterly benign at first. I had to write some code for issue to customers, meaning it couldn’t use any of our core code and it had to be portable. So I did what was obvious: I targeted C++17 (because our customers were using MSVC which almost fully supports it) and used CMake as the build tool. No more arcane policies, no more old compiler, just modern C++ and standard tools.
You can guess what feeling appeared, I’m sure. C++17 is awesome. Structured
bindings are the best things in the world ever. This is a snippet of C++
iterating over all the keys and values of a std::map
(a dictionary/associative
array if you come from other languages):
for (const auto &[key, value] : my_map)
cout << key << " = " << value << '\n';
C++ weirdisms aside (e.g., const-correctness, references, cout
using the shift
operator), this expresses intent so perfectly. For each (key
, value
) pair
in my_map
, print it to the terminal. Contrast this to Python:
for key, value in my_map.items():
print('{} = {}'.format(key, value))
It’s pretty similar, but the C++ variant performs significantly faster (probably).
Let’s look at the old way of doing this in C++:
const map<string, SomeType> &my_map = create_my_map();
for (map<string, SomeType>::const_iterator it = my_map.begin();
it != my_map.end(); ++it)
{
const string &key = it->first;
const SomeType &value = it->second;
cout << key << " = " << value << '\n';
}
C++11 made this slightly better:
const auto &my_map = create_my_map();
for (const auto &item : my_map)
{
const string &key = item.first;
const SomeType &value = item.second;
cout << key << " = " << value << '\n';
}
But what’s this first/second business all about? Those who already know C++ understand it, but to beginners, I can only imagine it looks bizarre and arcane. The C++17 example is just so much simpler to understand, for beginners and old hands alike.
Discovering a New Friend
Building C++ software has always been slightly tedious to me. I’ve primarily
worked with Visual Studio because I am, at heart, a GUI-peasant. I like the
terminal and use it all the time—I wouldn’t be without it—but for some
problems, GUIs are just inarguably better. I cannot, for example, understand
why people enjoy using gdb
directly when programs like VS Code allow you to
just click on lines to add breakpoints and inspect the value of any variable
without needing to know where it is.
Starting with Visual Studio, in many ways, may have been a mistake. By starting
with a visual debugger, perhaps I’ve robbed myself of some insight achieved by
using gdb
or cdb
in the terminal. I’ve especially shielded myself from
writing Makefiles
as much as possible. With VS, just add files to a project
and click build; simples. On Linux, I’ve used qmake
a fair amount because of
all the Qt-related work I’ve done. Qt Creator natively understands qmake
files and provides a very similar UX to Visual Studio, so that’s always been
fine. For the majority of people, however, CMake is the tool of the trade,
not QMake. Qt itself is moving towards CMake too, so it’s something that I’ve
been meaning to learn for ages.
CMake doesn’t have the nicest syntax in the world, in my opinion, but it does
mostly make sense. I had to write a few Find*.cmake
modules to get my small
project working. I’d never done this before, but it only took about a morning.
Going from a complete novice in CMake who’d only ever used it to build simple
applications because I find Make annoying, to a slightly-less-than-complete
novice able to write a moderately complex CMake project so quickly felt… fun.
I had to support some code generators, so there was a need to invoke the
generator programs and feed their output back into the project so the files
would be compiled. To make things slightly more awkward, I was using a protocol
that has been standardised, but there are multiple libraries providing
implementations of it. The API is standard, too, so it largely doesn’t matter
which one you use. Naturally, I wanted to be able to just write
find_package(ThisProtocol)
and have it work, without caring which
implementation was installed on my (or our customer’s) system.
To my surprise, this was dead easy too. I didn’t spend hours and hours setting my system up and collating libraries. Instead I spent hours writing these scripts (whilst simultaneously learning how to do it) and ended up with a very simple system that did almost all the work for me, allowing me to focus on the problem at hand. Sort of like… Python? …or Node?
Couple this with C++17’s expressiveness and simplicity, and I had that familiar feeling wash over me again…
So My Friends Aren’t the Problem…
I often say “I like software, but I hate software.” Well, I’ve often thought something along those lines, but a friend put it so eloquently. Naturally, I took this feeling to be directed at all the faff that surrounds just solving the problem at hand. After a bit more thought, though, I’m not convinced this is true.
If it were, why have I enjoyed the setup behind this blog so much? I’ve found learning about AWS and cloud services in general to be really rewarding. Learning Gatsby has been fun, as has playing with React. I’ve even made a few little toy applications using React, REST APIs, GraphQL, AWS, Firebase, etc. just to play with all this stuff. Part of me explained this away as just peering behind the cloud-curtain, just as I’d done with C and C++ years ago.
Then I thought about the C++17/CMake experience. I had to write a load of framework-like code to make that program useful: things like loading plugins, minimising the amount of boilerplate required to add new features, tailoring some of the particularly annoying things toward code-generation, and writing the CMake scripts in a generic way (which I didn’t really need to do). All of this was fun, too, despite being 100% in front of the curtain.
So What is the Problem, Then?
API design.
To me, API design and UX design occupy the same thought-space: an API is a user experience, where your users are your fellow engineers. When designing some new library, you can’t just think of the problem it solves. People need to use this thing, after all.
I hypothesise that for any given problem there’s some minimum amount of complexity that it entails. No piece of code, regardless of the heroic feats of engineering that went into it, can possibly be less complex than this. This may seem obvious, but stick with me. As computer scientists and engineers, we automatically generalise solutions because many of the problems we face do look reasonably similar to each other. So we design APIs that allow their users to solve several problems similar to the core problem we’re solving.
This sounds great, but it isn’t free. As we make our libraries more and more general, we increase the complexity of the API. In a sense, even in the best possible case, we approach that finite complexity minimum I posited a moment ago. More often than not, however, we blow right past it and now have some complicated thing on our hands that we just accept because the problem itself is complicated. But the whole point of an API is to hide all this; it’s to present a vastly simplified view of the problem to its users, allowing people to solve this problem without needing to be experts.
In a sense, code-complexity (not computational complexity) is a bit like a currency. You “spend” some complexity in one area of a program/library to “buy” simplicity elsewhere. Consider something like cURL or Apache Thrift. Their APIs are pretty simple despite them handling a load of complex interactions behind the scenes. Sure you lose a degree of control over how these things happen, but the productivity gain is enormous. So if nothing is offered in return for the complexity, it’s a “bad investment.”
A library designed to solve a complicated problem that fails to reduce the apparent complexity of the problem has only described the problem in a different way. It hasn’t added any value to the user. It’s a poor UX.
Alright, How Do We Avoid This?
Start by bounding the scope of a library or tool. It sounds trivial, but it’s important. The temptation to generalise will always be there, and I’m not suggesting that general solutions should never be sought; I’m suggesting that, for now, a note is made on the roadmap. Look in the direction of generality but remain in the present, with the fundamental problem. In project management, scope creep is the subversive enemy lurking around every corner. The same is true here.
Things fundamental to an API should be trivial. If a library forces some concept on its users, it should be utterly trivial to learn and apply. It might be blindingly clever and amazingly mathematically pure or whatever, but if users can’t apply it properly, the library will never be broadly useful.
Creating a cargo cult is a dreadful code-smell. This symptom is surprisingly easy for some people to overlook. “It’s quite easy really, I just start from a template and fill in the gaps,” I’ve heard on occasion. This is copy-paste programming. Hearing this should set off alarm bells, because the library author has missed a huge opportunity to simplify the API. Sure, fixing it increases the complexity of the library’s implementation, but it provides a better UX; you get a better return on investment. The complicated stuff doesn’t need to be removed from the interface, just add some helpers. Many libraries offer a primary API and a simple one, where the simple one just implements a few common use-cases on top of the primary one.
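As a sketch of that last idea (all of the names here are hypothetical, not from any real library), the primary API keeps every knob while the simple API is nothing more than a helper implementing the common case on top of it:

#include <iostream>
#include <string>

// Primary API: every option exposed, more for the user to learn.
class Connection {
public:
    Connection(std::string host, int port, int timeout_ms)
        : host_(std::move(host)), port_(port), timeout_ms_(timeout_ms) {}

    std::string request(const std::string &verb, const std::string &path) const
    {
        // Stubbed out: a real implementation would talk to the network here.
        return verb + " http://" + host_ + ":" + std::to_string(port_) + path
             + " (timeout " + std::to_string(timeout_ms_) + " ms)";
    }

private:
    std::string host_;
    int port_;
    int timeout_ms_;
};

// Simple API: one helper covering the common case by calling the primary API
// with sensible defaults; nothing is removed from the interface.
inline std::string get(const std::string &host, const std::string &path)
{
    return Connection(host, /*port=*/80, /*timeout_ms=*/5000).request("GET", path);
}

int main()
{
    std::cout << get("example.com", "/index.html") << '\n';
}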
Artificially creating a base-class to hold common actions is probably a bad idea, even in a language like C++ that allows multiple inheritance. Take a moment to think about what inheritance really means: it’s a hierarchy of objects that are complete or meaningful in their own right; think Liskov. There’s nothing wrong at all with creating helper/worker objects/functions as implementation details to share functionality, but treat the inheritance hierarchy with the respect it deserves.
Inheritance hierarchies are spectacularly useless at modelling some heterogeneous domains. Just use composition, seriously. It will make your and your users’ lives easier. The implementation will probably become a bit more complicated but it’ll drastically simplify the UX. Sensible inheritance hierarchies mesh extremely well with this concept. Each component from which a higher-level object is composed can exist within its own inheritance hierarchy, allowing the library to better map onto its problem domain.
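Here’s a minimal sketch of that composition idea (again, the types are invented purely for illustration): rather than wedging Report into an artificial base class just to share logging and serialisation code, it’s composed from small helper components, each of which could live in its own sensible hierarchy if it ever needed one.

#include <iostream>
#include <string>

// Small helper components; implementation details, not base classes.
class Logger {
public:
    void log(const std::string &message) const
    {
        std::cout << "[log] " << message << '\n';
    }
};

class Serialiser {
public:
    std::string to_json(const std::string &payload) const
    {
        return "{\"payload\": \"" + payload + "\"}";
    }
};

// The user-facing type is composed from the helpers it needs; no inheritance.
class Report {
public:
    void publish(const std::string &body) const
    {
        logger_.log("publishing report");
        std::cout << serialiser_.to_json(body) << '\n';
    }

private:
    Logger logger_;
    Serialiser serialiser_;
};

int main()
{
    Report report;
    report.publish("quarterly numbers");
}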
Join
So what does all this have to do with the mobile-first stylesheet I mentioned in the first paragraph of this post?
There were very few obstacles preventing me from making the (thankfully few)
changes I had to make for this blog to be responsive. I literally just run
gatsby develop
, open a few different browsers, point them all at the
development server that spawns, and make changes to the SCSS. Saving the file
recompiles and refreshes the page. The feedback loop is incredibly tight,
meaning there’s very little there to knock me out of the flow. This UX can’t
help but be fun. I just solve the problem at hand. There are no detours or
distractions.
And that’s what made me think about why all this webdev has been so rewarding to
me. A hardware guy. A native programming guy. It felt funny to be enjoying
nudging text a few fractions of an em
here and thickening a line by a few
pixels there. It’s so easy to forget that APIs and tools have a UX too, and
that it can smother their users’ passion if the designers aren’t careful. Let’s
all agree to build empowering user experiences, from the backend, through the
APIs, to the frontend.
Programming should be fun.