I recently made a LinkedIn post about unnecessary performance optimizations. It was inspired by an anecdote about how using functions in the '60s and '70s was controversial because of function call overhead. That very likely mattered on 1960s hardware, but it's irrelevant to the vast majority of software built today. And when it does matter, it's more likely an issue with the programming language than with the hardware. The only time I've had to deal with function call overhead was in PHP 5.2, where inlining some code reduced a 40-second operation to 400 ms. You wouldn't have seen that in any other language in 2011, though.
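To make the anecdote concrete, here's a minimal sketch of the *kind* of change I mean. This isn't the original PHP; it's a TypeScript stand-in, and the function names are my own invention. The point is the shape of the change, not the numbers: on PHP 5.2, removing one call per element was the entire optimization, while most modern runtimes would inline a helper like this automatically.

```typescript
// A trivial helper called once per element in a hot loop.
function scale(x: number): number {
  return x * 1.07;
}

function withCalls(values: number[]): number {
  let total = 0;
  for (const v of values) total += scale(v); // one function call per element
  return total;
}

function inlinedByHand(values: number[]): number {
  let total = 0;
  for (const v of values) total += v * 1.07; // same arithmetic, call removed
  return total;
}
```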
Despite this, a lot of programmers seem to think function call overhead is worth worrying about in 2024, as evidenced by the comments on that post. I think that's a big problem. I've seen many startups fail to take off because the technical co-founder or founding engineer was wrapped up in micro-optimizations. I don't doubt that larger companies have gone under because of this.
The title of this post is "Why Don't Boxers Shave Their Eyebrows?" because I think that's the perfect analogy for this issue. One could argue that they should. Eyebrows have weight, don't they? That's weight that doesn't contribute to a boxer's performance, and non-functional weight costs stamina to carry around the ring. It could also affect the weigh-in, pushing a boxer into a higher weight class. Plus, there's the extra air resistance, further increasing the effort required to move.
Context matters, though. Will carrying around an extra five pounds affect a boxer's stamina? Sure. But eyebrows weigh less than a milligram on average, and there are 453,592 milligrams in a single pound. Eyebrows have *some* weight, but so little that it's irrelevant. Time spent worrying about small things like shaving eyebrows is time that could be spent training or recuperating. No one in their right mind would think a micro-optimization like that is worthwhile in boxing.
It's easy to do the equivalent in programming, though. Every engineer, myself included, has likely done it multiple times in their career. Doing something physical like boxing gives you a tangible frame of reference: you can see the size of a person and the size of their eyebrows, and you have intuition about the weight of hair versus the weight of bone or muscle.
We don't have that when programming. There are no physical representations of memory blocks or CPU cycles in front of us to use as a frame of reference. You can hold a CPU, but you can't hold a CPU cycle. It's all just in our heads as concepts. We can run benchmarks, but that requires code. How do we know that code is running the benchmarks accurately? There isn't anything tangible to represent the logic or data structures used. They too are just concepts in our heads. At best you can write them down, but that's still not the same as seeing and touching something.
The lack of a physical representation of what we're working with makes it significantly harder to take context into account. Let's use some code I wrote for calculating amortization tables as an example.
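The original code isn't reproduced here, so what follows is a hypothetical reconstruction rather than the real thing: a deliberately naive TypeScript sketch that re-derives each row's opening balance by replaying every prior month, recomputing each power of (1 + r) with an explicit loop along the way. The names, the payment formula arrangement, and the exact shape of the waste are all my own assumptions; what matters is that three nested O(n) loops make it O(n^3) in the number of months.

```typescript
interface Row {
  month: number;
  payment: number;
  interest: number;
  principal: number;
  balance: number;
}

// Remaining balance after k payments, using the closed form
// B_k = P(1+r)^k - M((1+r)^k - 1)/r, with (1 + r)^k computed
// by an explicit O(k) loop instead of Math.pow.
function balanceAfter(principal: number, r: number, payment: number, k: number): number {
  let growth = 1;
  for (let i = 0; i < k; i++) growth *= 1 + r;
  return principal * growth - (payment * (growth - 1)) / r;
}

function amortizationTable(principal: number, annualRate: number, months: number): Row[] {
  const r = annualRate / 12;
  let growth = 1;
  for (let i = 0; i < months; i++) growth *= 1 + r;
  // Standard fixed monthly payment: M = P * r * (1+r)^n / ((1+r)^n - 1)
  const payment = (principal * r * growth) / (growth - 1);

  const table: Row[] = [];
  for (let month = 1; month <= months; month++) {
    // Re-derive this row's opening balance from scratch by summing the
    // principal portion of every prior payment (O(n)), where each term
    // calls the O(n) balanceAfter. With the outer loop, that's O(n^3).
    let paidOff = 0;
    for (let k = 0; k < month - 1; k++) {
      const interestK = balanceAfter(principal, r, payment, k) * r;
      paidOff += payment - interestK;
    }
    const opening = principal - paidOff;
    const interest = opening * r;
    table.push({
      month,
      payment,
      interest,
      principal: payment - interest,
      balance: opening - (payment - interest),
    });
  }
  return table;
}

// A 30-year loan is only n = 360 rows; even cubed, that finishes
// before your finger leaves the Enter key.
console.table(amortizationTable(250_000, 0.065, 360).slice(0, 3));
```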
It runs in O(n^3) time. If you don't know what that means, it means I would never pass a technical interview at most tech companies with this code. And yet it completes faster than a person can type, even for a 100-year loan, where n is 1,200 months. That makes the inefficiency of the algorithm irrelevant.
When does that inefficiency become relevant, though? We would never have a loan with a million months as the term; trying to account for that would be the equivalent of shaving eyebrows. Yet the algorithm could theoretically be reused for something where a million data points makes sense, and at that point this code would noticeably hurt the user experience. So where is the line? Is it 500,000 data points? 100,000? 50,000? 50,010? And while it's obvious that a term of more than a few thousand months is extremely unlikely, developers often deal with data where the original intent is small but the volume eventually balloons. The line is complicated further by the fact that performance doesn't degrade linearly: at some point you run out of memory and start paging, and performance drops like a rock.
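To get a feel for how fast the cost curve bends, here's a rough back-of-the-envelope on the sketch above. The thresholds in the comments are illustrative guesses, not measurements; real behavior depends on constant factors, the runtime, and when paging kicks in.

```typescript
// Rough step counts for an O(n^3) algorithm at various input sizes.
for (const n of [1_200, 50_000, 1_000_000]) {
  console.log(`${n} -> ~${(n ** 3).toExponential(1)} steps`);
}
// 1,200     -> ~1.7e9  steps: fine on modern hardware
// 50,000    -> ~1.3e14 steps: minutes to hours
// 1,000,000 -> ~1.0e18 steps: effectively never finishes
```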
Nothing scares humans more than uncertainty. The uncertainty of how the software we build will be used causes many developers to default to optimizing for extreme cases. It's similar to the way business leaders make safe choices:

- If the software never needed to reach scale and you made it scale, you have the comfort of knowing your code is still running well.
- If the software did need to reach scale and you failed to make it scale, you have to deal with the consequences of your code failing spectacularly.
- And if you saved time by delivering faster and it didn't need to scale, your boss (who is most likely an engineer) probably won't care much, beyond being glad they can hand you more work.
Most people running businesses would prefer to deliver software to their customers more quickly. Yet the incentives created by the culture at the companies they run reward engineers for shaving eyebrows. This has left many products effectively treading water for months, if not years, while their competitors surpass them. I wish there were an easy solution, but there probably isn't. The main one I can think of is including engineers in product and business conversations. That's a topic for another time.