Right, and again this is reading too much into it. The simplest thing possible does not mean the best solution. If your solution that worked really well yesterday no longer scales today, it's no longer the correct solution and will require a more complex one.
But sometimes it IS better to think a few steps ahead, rather than building a new system from scratch every time things scale up. It's not always easy to upgrade things incrementally: just look at IPv4 vs IPv6.
> But sometimes it IS better to think a few steps ahead, rather than building a new system from scratch every time things scale up.
The problem is knowing when to do it and when not to do it.
If you're even the slightest bit unsure, err on the side of not thinking a few steps ahead because it is highly unlikely that you can see what complexities and hurdles lie in the future.
In short, it's easier to unfuck an under-engineered system than an over-engineered one.
The best way to think a few steps ahead is to make as much of your solution disposable as possible. I optimize for ease of replacement over performance or scalability. This means that my operating assumption is that everything I’m doing is a mistake, so it’s best to work from a position of being able to throw it out and start over. The result is that I spend a lot of time thinking about where the seams are and making them as simple as possible to cut.
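To make the "seams" idea concrete, here's a minimal sketch (the names like StorageBackend and save_report are made up for illustration): keep the seam itself tiny so whatever sits behind it stays disposable.

```python
# Hypothetical sketch: keep the seam (the interface) small so either side
# can be thrown away and rewritten independently.
from typing import Protocol


class StorageBackend(Protocol):
    """The seam: callers only ever know about this one method."""
    def put(self, key: str, data: bytes) -> None: ...


class LocalDiskStorage:
    """Today's 'simplest thing'; disposable if it stops scaling."""
    def put(self, key: str, data: bytes) -> None:
        with open(f"/tmp/{key}", "wb") as f:
            f.write(data)


def save_report(storage: StorageBackend, report: bytes) -> None:
    # Callers depend on the seam, not the concrete backend, so swapping
    # LocalDiskStorage for an object-store client later is a one-line change.
    storage.put("report.bin", report)
```

The point isn't the Protocol mechanics, it's that the seam is one method, so "cutting" it and starting over is cheap.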
I agree with thinking a few steps ahead. It is particularly useful in case of complex problems or foundational systems.
Also maybe simplicity is sometimes achieved AFTER complexity, anyway. I think the article means a solution that works now... target good enough rather than perfect. And the C2 wiki (1) has a subtitle '(if you're not sure what to do yet)'. In a related C2 wiki entry (2) Ward Cunningham says: Do the easiest thing that could possibly work, and then pound it into the simplest thing that could possibly work.
IME a lot of complexity is due to integration (in addition to things like scalability, availability, ease of operations, etc.). If I can keep interfaces and data exchange formats simple (independent, minimal, etc.) then I can refactor individual systems separately.
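As a toy illustration of a "simple, minimal" exchange format (the field names here are invented): if two systems only agree on a small, flat payload, each side can be refactored or replaced without coordinating a release.

```python
# Hypothetical sketch: a deliberately minimal exchange format between two
# systems. Each side can change its internals freely as long as this tiny
# contract stays stable.
import json
from dataclasses import dataclass, asdict


@dataclass
class OrderEvent:
    order_id: str
    amount_cents: int
    currency: str


def to_wire(event: OrderEvent) -> str:
    return json.dumps(asdict(event))


def from_wire(payload: str) -> OrderEvent:
    return OrderEvent(**json.loads(payload))


assert from_wire(to_wire(OrderEvent("o-1", 4200, "EUR"))).amount_cents == 4200
```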
Yes sometimes. But how can you know beforehand? It’s clear in hindsight, for sure.
The most fundamental issue I have witnessed with these things is that people have a very hard time taking a balanced view.
For this specific problem, should we invest in a more robust solution which takes longer to build or should we just build a scrappy version and then scale later?
There is no right or wrong. It depends heavily on the context.
But some people, especially developers I'm afraid, only have one answer for every situation.
IPv6 is arguably a good example of what happens when you don't do the simplest thing possible. What we really needed was a bigger IP address space. What we got was a whole bunch of other crap. If we had literally expanded IPv4 by a couple of octets at the end (with compatible routing), would we be there now?
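For a sense of scale, the back-of-the-envelope arithmetic behind "a couple of octets" (this ignores routing entirely, it's just address-space size):

```python
# Address-space sizes: IPv4 today, IPv4 plus two extra octets as suggested
# above, and IPv6 for comparison.
ipv4 = 2 ** 32                  # ~4.3 billion addresses
ipv4_plus_two_octets = 2 ** 48  # ~281 trillion addresses
ipv6 = 2 ** 128                 # ~3.4e38 addresses

for name, size in [("IPv4", ipv4),
                   ("IPv4 + 2 octets", ipv4_plus_two_octets),
                   ("IPv6", ipv6)]:
    print(f"{name:>16}: {size:.3e} addresses")
```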
In a place with even less IPv6 adoption, probably. It's not like there weren't similar proposals discussed, and there's no need to rehash the exact same discussion again.
The problem quickly becomes "how do you route it", and that's where we end up with something like today's IPv6. Route aggregation and PI addresses are impractical with IPv4 + extra bits.
The main changes from v4 to v6, besides the extra bits, are mostly that some unnecessary complexity was dropped, which in the end is a net positive for adoption.
It can be hard enough to fix things when some surprise happens. Unwinding complicated “future proof” things on top of that is even worse. The simpler something is, the less you hopefully have to throw away when you inevitably have to.
IPv4 vs IPv6 seems like a great example for why to keep it simple. Even given decades to learn from the success of IPv4 and almost a decade in design and refinement, IPv6 has flopped hard, not so much because of limitations of IPv4, but because IPv6 isn't backwards compatible and created excessive hardware requirements that basically require an entirely parallel IPv6 routing infrastructure to be maintained in addition to IPv4 infrastructure, which isn't going away soon. It solved problems too far ahead, problems we aren't having.
As is, IPv4's simplicity got us incredibly far, and it turns out NAT and CIDR have been quite effective at alleviating address exhaustion. With some address reallocation and future protocol extensions, it's looking entirely possible that a successor was never needed.
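The CIDR half of that is easy to demo with the standard library (a toy example with documentation-range addresses, not real routing-table data): adjacent prefixes collapse into one, which is what keeps routing tables manageable.

```python
# Toy illustration of CIDR route aggregation: four adjacent /24s collapse
# into a single /22.
import ipaddress

routes = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100, 104)]
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('198.51.100.0/22')]
```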
Another thing that impedes us is the sunk cost fallacy. Classic "simple vs easy" situation: even if a design is comparatively simpler, it is harder to make such a change for a small feature.
We had a project which was supposed to convert live objects back into code with autogenerated methods. The initial design used a single pass over the object graph, creating the HDL abstractions and combining the method blocks in that same pass.
That is big, hairy code with a lot of issues. Simpler would be to handle one problem at a time: method generation in one pass, then converting the methods to HDL in another. But getting approval to rework a deployed app is so hard, particularly when it is a complete rewrite.
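A rough sketch of that "one problem per pass" shape (all names here are hypothetical; the real object model and HDL output are obviously far more involved):

```python
# Hypothetical sketch of the two-pass design: pass 1 walks the live object
# graph and emits method descriptions; pass 2 converts those descriptions to
# HDL. Either pass can be rewritten without touching the other.
from dataclasses import dataclass


@dataclass
class GeneratedMethod:
    name: str
    body: str  # language-neutral intermediate form


def generate_methods(obj_graph) -> list[GeneratedMethod]:
    """Pass 1: object graph -> method descriptions (no HDL knowledge here)."""
    return [GeneratedMethod(name=f"get_{attr}", body=f"return {value!r}")
            for attr, value in vars(obj_graph).items()]


def methods_to_hdl(methods: list[GeneratedMethod]) -> str:
    """Pass 2: method descriptions -> HDL text (no object-graph knowledge here)."""
    return "\n".join(f"// {m.name}\n{m.body}" for m in methods)
```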