We rarely think of correctness and efficiency as two sides of the same coin. There are tools to ensure correctness and different tools to improve efficiency, and we often gladly sacrifice one for the other, depending on our requirements. But why can’t we have both?
I believe both are achievable. One obstacle along the way is how we develop. We must seek a better understanding of development goals, requirements and the project’s overall history. Too often, in day-to-day development, a pattern gets slapped onto the solution just to tick off the next item on the requirements list. That is what working without goals looks like, and it reinvents the wheel in ways you don’t even expect.
Zoom out and look at other industries. At certain points in their lifetime, maximizing yield takes priority over re-designing or innovating. And I’m not talking about premature optimization, or optimization in general; forget those terms for now. I want to talk about yield. In some corners of software development, we’ve sunk well below even low yields. With layer upon layer of mostly run-time abstraction, hardware has been somewhat forgiving, but now there’s another problem. We need to become aware and demanding again about how to attain the maximum yield the requirements allow.
Maximum yield is not always achievable, but it must be a goal. Your software should run with as little overhead as possible. After all, the “free lunch is over”, and human expectations of performance grow much faster than hardware capability.
A while ago, Bjarne Stroustrup posted an interesting article on just this topic. The growing layers of run-time abstraction, and the growing lack of understanding of how machines work that comes with them, are menacing. To reiterate Bjarne’s arguments:
- Datacenters consume megawatts of power.
- Smartphone batteries barely last a day.
- System failures are painful; they can harm people physically or financially.
- Much of the time, a software bug can’t be fixed by sending out a repairman.
And it’s not like we’ve improved. These are clear long-term problems; how should we deal with them?
Extending Bjarne’s idea: we should stop obsessing over minimizing human effort and start looking at the yield of our solutions. A change of ideals matters more here than a change of tools. Relatedly, Bjarne discusses how the requirements of infrastructure software differ greatly from those of “regular software”. While it’s important not to mix the two up, any application we use daily becomes infrastructure in its own right, no matter how it was developed. As Bjarne Stroustrup puts it: “ordinary personal computers and smartphones are used as platforms for applications that also serve as infrastructure in my operational use of that word, the fundamental software of such platforms is itself infrastructure”. Keep this in mind, so you notice where you’ve crossed that line and are prepared for it. This is about computing less and getting more done. No sane language makes you work more; languages only demand differing levels of understanding of the system being implemented.
We should stop guessing and start designing.
If you’re a software developer working on infrastructure, the linked paper is an interesting read. Don’t follow it to the letter or copy its tool choices; follow the idea that, even if you cage yourself inside software, you still need to get out and interact with the world, and the box ends up being in the way. This is not a call for messy, unmaintainable low-level code. It’s a call for stricter tools and for limiting the abuse of our precious run-time.
A lot of the time we choose to sacrifice efficiency for functionality. But do you always need to? We’ve grown used to heavy abstractions, with much of the work done for us by various high-level, dynamically-typed languages, and we’ve also grown used to the fact that they’re just slow. Bjarne Stroustrup explores this even further: high-level languages hide many details from us, memory allocation becomes unpredictable, and data structures are just generic containers offering little control over how the program will actually run. This produces a great disconnect from the basic reasons programs run the way they do, and solution efficiency drops sharply in an age when “our expectations of speed grow faster than the speed of computers”.
Understanding computer architecture is paramount to attaining efficiency. We immediately talk about minimizing human effort, put off maximizing yield for later, and often never come back to it. As one example, with proper design a lot of computation can be moved from run-time to compile-time. Consider smart compilers and code that can enforce a good design. Yet we barely do any of that. In our laid-back world, we write tests that execute at run-time and use them as an excuse to avoid understanding our design and implementing accordingly. We’d rather guess the requirements, forget the design and just apply some pattern, which is error-prone and inefficient. There is a variety of tools that let us forget design, but we can do much better than that.
What’s your understanding of computers? Do you use lists or vectors? Bjarne makes a good comparison, and it turns out that using lists can be orders of magnitude slower than using vectors, and not without reason. It may depend on how many layers you’ve slapped between the keyboard and the hardware, but there’s barely any reason to sacrifice that much speed for a seemingly simpler design. I don’t want to promote anything so much as I want my smartphone’s battery to last longer, my computer to stop constantly overheating, and cloud storage to not burn through a town’s yearly power budget. In the age of garbage collectors, interpreters and dynamic typing, we should start being more careful.
I moved to C++, even though it’s not a perfect tool, and this is neither praise nor bashing; there may well be a much better tool for the job. I believed in the idea of having efficiency and correctness in the same package, not in whatever was easiest to use. I simply liked understanding how my solution actually runs. I was amazed at how long and interesting a history a programming language can have. C++ provides immense power to implement low through high levels of abstraction safely, while also keeping tabs on proper design. But with power comes responsibility, and in this respect the crowd seems to prefer being irresponsible.