Some say that “complicated” is a word used as an excuse by people who are too lazy to bother, or that designers made things artificially complicated to use or understand because they didn’t want to invest the time to make them simple, while the problem at hand isn’t inherently complex and could, with reasonable effort, be solved in a simpler way. I recently learned that there are people who say that calling something “complex” is just a statement about current human capacity, because “complex” problems become solvable or even trivial once we manage to understand them. While I tend to agree, I’m not optimistic enough about humans transcending their biological limitations, or augmenting themselves with tooling, to believe that not a single problem will remain unsolvable due to complexity in practice or in theory. So I look at it more from a mathematical perspective, at what is sometimes also called “wicked problems”.

Wau Holland, for example, a co-founder of the Chaos Computer Club, described this category of problems like this: in earlier times, the construction of machines was relatively simple; an engineer would devise a way for the mechanical and/or electrical parts to work together. With computers, as a result of miniaturization and of building larger and larger systems, we entered a different stage: no single individual can keep the entire design in his head any more. If teams work on it, each responsible for different parts, it’s impossible to anticipate what will happen when the parts come together. All sorts of unintended side effects might occur, because it was predetermined from the start that no team member would know all the details of the parts built by the other groups, so the interplay of those details remains unknown; that division of knowledge is itself a precondition for being able to build such systems in the first place.
Complex systems tend to be large, with many things going on inside them that can affect each other, and humanity doesn’t have many tools to deal with them properly, so they become a source of serious malfunction or even system collapse. The attempt to “repair” them might cause a whole lot of other parts to fail, or what looks fine for a long period of time can start to cause trouble as soon as a certain threshold is exceeded. With advancing technological abilities, it became pretty apparent that we increasingly face these kinds of problems not only in computing but everywhere, be it in construction projects, in social dynamics or through our interference with the ecological environment. This was of course recognized and led to systems theory and cybernetic thinking; computer programmers also developed a few methodologies to at least get some work done.

And still, no engineer works on introducing errors. If he found one, he would fix it. The reason we don’t have error-free, ideal systems is that the engineer built the system exactly as he thought it would be error-free, but didn’t know or understand all of its details; he built it according to his assumptions about how each of the parts would behave. In other words: there are no errors, just incorrect assumptions; the system always behaves exactly the way it was constructed.

How can we improve on that? Always question all assumptions? That road leads to more uncertainty, not less. It’s not only that we lack a way to learn about the unknown unknowns; we have no idea what that learning might look like or how it could even be started. All we do is try to mitigate the risk. Computer science would say that we don’t know whether the algorithm will terminate, and we don’t know how to find out. Complexity theory offers good, seemingly simple examples that illustrate the difficulty: prime factorization (P/NP?), the travelling salesman problem, the Collatz conjecture, infinity and many more. It could be that “complex” is a name for a set of problems for which we, by definition, lack the required prior knowledge and still have to solve them, so that the ideal solution can’t be found other than by sheer luck in guessing or by brute-force trial and error. And here we enter the philosophical question of whether we will ever be able to solve the last remaining problem, whether we can escape our event horizon, whether we can prove Gödel wrong.
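The Collatz conjecture makes the point concrete: the rule fits in one line of code and can be checked for any given number, yet nobody has proved that it always terminates. A minimal sketch in Python (the function name and structure are my own illustration, not part of any standard formulation):

```python
def collatz_steps(n: int) -> int:
    """Count the steps until n reaches 1 under the Collatz rule:
    halve even numbers, map odd n to 3n + 1.
    The loop is only known to end because nobody has found a
    counterexample; no proof guarantees termination for all n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps for such a small starting value
```

That a trivially checkable rule resists proof is exactly the gap between “simple to state” and “simple to solve” that the examples above point at.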

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.