Many years ago, I got the book "Die Kunst, vernetzt zu denken -- Ideen und Werkzeuge für einen neuen Umgang mit Komplexität" by Frederic Vester (the English translation "The Art of Interconnected Thinking -- Tools and concepts for a new approach to tackling complexity" is unfortunately out of print). Both variants were released under restrictive copyright. Vester did, probably in the context of the green movement of the 70s/80s, some form of "popular science", in this case promoting a cybernetic view/approach. I don't know if it's my natural style of thinking that attracts me to this kind of thing, or if it's this kind of thing that formed my style of thinking. Anyhow, from the book I "learned" (got the impression) that, for complex "problems"/situations/systems/environments, it's already hard to properly identify, analyze and understand the nature of the setup of (the) system(s), as a precondition for finding a good leverage point that also avoids/mitigates unwanted side effects. But long before getting to any changing of systems, the book drove home to me (even if that wasn't its intention) that there are far too many details/aspects, questions and pieces of information, no good way to recognize what's important and what isn't (given that the system/scenario may also change dynamically all the time), and no easy/safe/reliable way or tooling to model it in a useful fashion. The data/understanding might also be wrong or incomplete (cf. Gödel). Now, I think Vester proposed to identify a few key factors/mechanisms (flows, feedback loops), and he also developed some software to help with the modeling. However, this software is restrictively licensed as well, so I never got far into an analytical practice, for lack of tooling, and was/am far too worried that building a libre-freely licensed alternative might be too difficult.
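To give a flavor of the kind of factor analysis Vester's approach works with: his Sensitivity Model rates, in an influence matrix, how strongly each factor affects every other factor, and then uses the row sums ("active sum") and column sums ("passive sum") to judge which factors drive the system and which are mostly driven by it. Here's a minimal sketch of that idea -- the factor names and ratings are entirely made up, and the classification is reduced to its simplest possible form:

```python
# Toy influence matrix: impact[i][j] rates how strongly factor i
# affects factor j (0 = none ... 3 = strong). All values invented.
factors = ["population", "traffic", "air quality", "green space"]
impact = [
    [0, 3, 1, 2],  # population
    [0, 0, 3, 1],  # traffic
    [1, 0, 0, 0],  # air quality
    [1, 1, 2, 0],  # green space
]

# Active sum: how much a factor drives the system (row sum).
# Passive sum: how much it is driven by the system (column sum).
for i, name in enumerate(factors):
    active = sum(impact[i])
    passive = sum(row[i] for row in impact)
    role = "driving" if active > passive else "driven"
    print(f"{name:12s} active={active} passive={passive} -> {role}")
```

A "driving" factor with a low passive sum would be a candidate leverage point; a heavily "driven" one would mostly transmit side effects.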
This might be in part because in the local library of my city I found a book that was a report by the International Conference on Climate Change, which the library was throwing out. Only recently did I find out that the ICCC is an organization set up to question/doubt/discredit climate change, but back then, without the Internet, there wasn't much of a way to look things up. OK, maybe the ICCC goes against and challenges the traditional/conventional, "orthodox", potentially simplistic view on climate change, or maybe the entire notion altogether, I don't know. But the book/report I obtained was a critique of the many models and simulations, trying to point out that these were scientifically incomplete and also contradicted each other, asking how much is then really known (or knowable) about climate change if most of the simulations are artificial and made up. So in my mind, this connected with Vester's description of the problem of complexity. My understanding is that "Limits to Growth" is similarly primarily about a simulation, which may suffer from the same problem, but I didn't check. I guess it's commonly understood that on a finite planet with a finite resource distribution there can't be exponential growth; there are limits to growth, enforced by the constraints of reality as a dataset/model that doesn't follow the purely abstract, virtual possibilities of mathematical instruments. Similarly, with the ecosystem getting out of balance and the risk of entering an irreversible death spiral, there might be counters to that, or it's also imaginable that more primitive life-forms manage to sustain themselves in case the development "bottoms out" at some point short of extinction for all biological/organic life. But there's no guarantee for any of this: a dead desert planet is a real option, and there's no guarantee for a "happy end", as real life is not a Disney movie.
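The point about exponential growth hitting limits can be made concrete with the textbook logistic model, where growth slows as a carrying capacity is approached. This is my own minimal illustration with arbitrary parameters, not something taken from the report or from "Limits to Growth":

```python
# Exponential growth ignores limits; logistic growth levels off
# at a carrying capacity K. Parameters here are arbitrary.
def exponential(x0, r, steps):
    x = x0
    for _ in range(steps):
        x += r * x
    return x

def logistic(x0, r, K, steps):
    x = x0
    for _ in range(steps):
        x += r * x * (1 - x / K)  # growth slows as x approaches K
    return x

print(exponential(1.0, 0.1, 200))      # grows without bound
print(logistic(1.0, 0.1, 100.0, 200))  # levels off near K = 100
```

The same growth rate r produces wildly different long-run behavior once a finite limit K enters the model -- which is the whole argument in one line of code.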
In 2018, I had some exchanges with William Charlton, who had/has a project for bringing Open Space (a methodology by Harrison Owen) to the computer (see http://untiednations.com/community/plan-sos). Fine, in-person meetings with shuffling cards around on a physical pinboard might be important, but computerized modeling/debate might be useful too. William's approach had been to use GraphML for representing relations/flows, where the current view (Point of Interest) would move along such a "chain of causality": incoming connections being causes for the current PoI and outgoing connections being the effects (which in turn are causes for subsequent effects). The main idea is that such a tool/method would help people model and discuss together what they think the problems are, the nature/representation of the system, etc., to ideally gain a better shared understanding in order to come up with better proposals/solutions. I think William wanted to build the service/tool in ASP, and I try to avoid the Microsoft world. Furthermore, William went into system dynamics and simulation, and I was always worried about that for the reasons mentioned above and below. Still, for this I made a few things for graphs, which I was later able to use/re-purpose for other projects. In the last 1-2 years I started to buy up a few (old) books on computer simulation -- primarily for their relation to domain/object modeling and object-based/-oriented programming. Of course, I have no time to read these books, nor do I understand what's in them. During my professional training, I once worked on some equipment testing (a modular network/telecommunications routing gateway) and struggled a bit with its non-deterministic nature, as during boot-up and operation, the hardware modules/boards each produce various events and enter into different states. I can imagine deterministic simulations, but these don't represent reality well, except in very simple or strictly formal regimes.
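The navigation idea -- incoming edges as causes of the current Point of Interest, outgoing edges as its effects -- can be sketched in a few lines. The node names and edges below are invented, and a plain edge list stands in for an actual GraphML file:

```python
# Directed cause -> effect edges. In a real tool these would be
# loaded from a GraphML file; here they are invented examples.
edges = [
    ("deforestation", "soil erosion"),
    ("soil erosion", "crop failure"),
    ("crop failure", "migration"),
    ("drought", "crop failure"),
]

def causes(poi):
    """Incoming connections: what leads to the current Point of Interest."""
    return [a for a, b in edges if b == poi]

def effects(poi):
    """Outgoing connections: what the current Point of Interest leads to."""
    return [b for a, b in edges if a == poi]

poi = "crop failure"
print("causes: ", causes(poi))   # ['soil erosion', 'drought']
print("effects:", effects(poi))  # ['migration']
```

Moving the PoI to one of its effects and repeating the lookup is exactly the walk along the "chain of causality" described above.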
If, on the other hand, there's a source of divergence (or even randomization), that will lead to a combinatorial/permutational explosion of many possible futures/projections, which may not be that useful for domains where the worst case, best case and average/anything in between are roughly known, and where it doesn't matter that much to generate a number of interesting variations (and then, on what basis would one cherry-pick what's interesting and what isn't?). Trying to model flows and interactions with increasing accuracy might lead to a better result than projections generated by computational brute force, assuming a non-mechanistic/non-discrete world view anyway. The quantitative calculation might be misleading and less relevant than a qualitative and relative assessment. Discovery could potentially be aided by (active) inference, etc. For these reasons, I'm reluctant to jump into simulation and system dynamics. There are a number of people and projects to cite: Gene Bellinger models systems using Kumu. The Canonical Debate Lab covers argument/debate structures towards knowledge representation. On the other hand, its participants tend to each have their own platforms/services, and it's not clear to what extent some form of "canonical debate" (here for the purposes of problem/system modeling) will be created and shared/published. In the financial world, there's the example of James "Jim" Simons, see https://www.youtube.com/embed/QNznD9hMEh0 ("James Simons (full length interview) - Numberphile") and https://www.youtube.com/embed/Tj1NyJHLvWA ("James H. Simons: Mathematics, Common Sense and Good Luck"). Similarly, there's AlgoBro (https://www.twitch.tv/algobro), who's building a software system for algorithmic trading to run a new hedge fund.
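To make the explosion concrete: if each simulation step can diverge into just b variants, d steps already produce b**d distinct trajectories. A trivial back-of-the-envelope sketch (the branching factor and depths are arbitrary):

```python
# With b branching choices per step over d steps, the number of
# distinct trajectories is b**d -- far too many to inspect one by one.
def futures(branches, depth):
    return branches ** depth

for depth in (10, 20, 30):
    print(depth, futures(3, depth))
```

Even a modest branching factor of 3 yields roughly 2 * 10**14 trajectories at depth 30, which is why generating "all possible futures" quickly stops being informative.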
"Algorithmic trading" here is meant not in the sense of high-frequency trading or daytrading, and not limited to a trading bot only, but gathering market data from the data providers and algorithmically identifying anomalies and rare events, in order to potentially act automatically on other, similar occurrences. From there, also consider https://documentaryheaven.com/the-midas-formula-trillion-dollar-bet about the Black-Scholes-Merton model for calculating the price of an option. It's no surprise that other fields, domains and industries are far better at modeling and simulation where there's a benefit/profit to be gained (besides the financial sector, it's a common method/tool to help with engineering, as that's often far cheaper, safer and faster than experimental testing). Modeling/simulating systems without any or much control/influence over them can only remain an analytical exercise. I don't easily see how it would make much sense to invest all the effort/time into building models/simulations if the insights sourced from them don't translate into actionable choices, some of which would indeed have to be carried out in practice.
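As an aside, the Black-Scholes formula mentioned above is compact enough to state in a few lines -- the standard textbook closed form for a European call option, with arbitrary example parameters:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call price: spot S, strike K, maturity T (in years),
    risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call, one year out, 5% rate, 20% volatility:
print(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # ~10.45
```

The irony the documentary highlights is that a formula this small helped build -- and then sink -- a trillion-dollar trading operation, which fits the broader point about trusting quantitative models.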