
This explains virtually everything:

- that's why OOP failed: side effects make the software too liquid for its complexity

- that's why functional and generic programming are on the rise: good FP implementations are natively immutable, and generic programming makes FP practical (see the sketch after these points)

- that's why Kotlin and Rust are in a position to purge Java and C, philosophically speaking - the only things that remain are technical concerns, such as JetBrains' IDEA lock-in (that's basically the only place where you can do proper Kotlin work), as well as Rust's "hostility" to other bare-metal languages, embedded performance, and compiler speed.
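
A rough sketch of that FP point, in Kotlin (all names made up): immutable-by-default data plus a small generic combinator.

    // Immutable value type: "updates" produce new values.
    data class Point(val x: Int, val y: Int)

    // A generic higher-order function: compose f with itself.
    fun <T> twice(f: (T) -> T): (T) -> T = { f(f(it)) }

    fun main() {
        val p = Point(1, 2)
        val moved = p.copy(x = p.x + 1) // p itself is untouched
        println(moved)                  // Point(x=2, y=2)

        val addTwo = twice<Int> { it + 1 }
        println(addTwo(0))              // 2
    }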





What is it with this take that OOP is dead… even the Linux kernel heavily uses OOP.

Inheritance is what has changed in scope: it's now discouraged to base your inheritance hierarchy on your design domain.


It always depends on the definition of OOP. Typical enterprise OOP is grounded in the idea of creating black boxes that you can't misuse. That creates silos that are hard to understand externally (and often internally as well, because their implementation tends to be composed of smaller black-box objects). That practice may prevent some misuse, but it creates a lot of problems globally, because nobody understands what's happening anymore on the global scale. This leads to inefficiencies, performance-wise as well as development-wise. Even with some understanding, there is typically so much boilerplate that changing things around becomes extremely tedious.

Actually, I have some similar concerns about powerful type systems in general -- not just OOP. Obsessing over expressing and enforcing invariants at the small scale can make it hard to make changes, and to make improvements at the large scale.

Instead of creating towers of abstraction, what often works better is to structure things as a loosely coupled set of smaller components -- bungalows where possible. Interaction points should be limited. There is little point in building up abstraction to prevent every possible misuse when dependencies are kept in check, so that module 15 is only used by 11 and 24. The callers can easily be checked when making changes to 15.

But yeah -- I tend to agree with GP that immutability is a big one. Writing things once, and making copies to avoid ownership problems (deleting an object is mutation too), prevents a lot of bugs. And there are so many more ways to realize things with immutable objects than people knew some time ago. The typical OOP codebase from the 90s and 00s is chock-full of unnecessary mutation.
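
To make "writing things once, making copies" concrete, a minimal Kotlin sketch (hypothetical names):

    // An immutable record; a deposit returns a new Account rather
    // than mutating the old one, so no caller sees surprise changes.
    data class Account(val owner: String, val balance: Long)

    fun deposit(a: Account, amount: Long): Account =
        a.copy(balance = a.balance + amount)

    fun main() {
        val before = Account("alice", 100)
        val after = deposit(before, 50)
        println(before) // Account(owner=alice, balance=100) -- unchanged
        println(after)  // Account(owner=alice, balance=150)
    }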


> the idea of creating black boxes that you can't misuse

Could you please expand upon your idea, particularly the idea that creating (from what I understood) a hierarchical structure of "black boxes" (abstractions) is bad, and perhaps provide some examples? As far as I understand, composing lower-level bricks (e.g. classes or functions that encapsulate some lower-level logic and data, whether technical details or business stuff) into higher-level bricks is what I was taught to be a fundamental idea in software development that helps manage complexity.

> structure things as a loosely coupled set of smaller components

Mind elaborating upon this as well, pretty please?


> Could you please expand upon your idea that [..] a hierarchical structure of "black boxes" [...] is bad?

You'll notice it yourself when you try to actually apply this idea in practice. But here's a possible analogy: how many tall buildings are around your place, what did they cost, how groundbreaking are they? Chances are, most buildings around you are quite low. Low buildings have a higher overhead in space cost, so especially in denser cities there is a force pushing toward buildings with more levels.

But after some number of levels, there are diminishing returns from going even higher, compared to just creating an additional building of the same size. And overhead increases: higher levels are more costly to construct, and they require a better foundation. We can see that most tall buildings are quite boring: how to construct them is well understood, there isn't much novelty. There just aren't that many types of buildings that have all these properties: 1) tall/many levels, 2) low overall cost of creation and maintenance, 3) practical, 4) novel.

With software components it's similar. There are a couple of ideas that work well enough that you can stack them on top of each other (say, CPU code on top of CPUs on top of silicon, userspace I/O on top of filesystems on top of hard drives, TCP sockets on top of network adapters...), which allows you to make things that are well enough understood and robust enough, and it's really economical to scale out on top of them.

But also, there isn't much novelty in these abstractions. Don't underestimate the cost of creating a new CPU or a new OS, or new software components, and maintaining them!

When you create your own software abstractions, those just aren't going to be that useful, they are not going to be rock-solid and well tested. They aren't even going to be that stable -- soon a stakeholder might change requirements and you will have to change that component.

So, in software development, it's not like you come up with rock-solid abstractions and combine 5 of those to create something new that solves all your business needs and is understandable and maintainable. The opposite is the case. The general, pre-made things don't quite fit your specific problem. Their intention was not focused on a specific goal. The more of them you combine, the less the solution fits, the less understandable it is, and the more junk it contains. Also, combining is not free. You have to add a _lot_ of glue to even make it barely work. The glue itself is a liability.

But OOP, as I take it, is exactly that idea. That you're creating lots of perfect objects with a clear and defined purpose, and a perfect implementation. And you combine them to implement the functional requirements, even though each individual component knows only a small part of them, and is ideally reusable in your next project!

And this idea doesn't work out in practice. When trying to do it that way, we only pretend to abstract, we just pretend to reuse, and in the process we add a lot of unnecessary junk (each object/class has a tendency to be individually perfected and extended, often for imaginary requirements). And we add lots of glue and adapters so the objects can even work together. All this junk makes everything harder and more costly to create.

> structure things as a loosely coupled set of smaller components

Don't build on top of shoddy abstractions. Understand what you _have_ to depend on, and understand its limitations. Build as "flat" as possible, i.e. don't depend on things you don't understand.


Thanks a ton! While I don't have the experience to understand all of it, I appreciate your writing, like the sibling poster (and that you didn't delete your comment)!

It reminds me of huge enterprise-y tools, which in the long run are often more trouble than they're worth (reimplementing just the subset you need would perhaps be better), and (given the way you speak about OOP) of bloated "enterprise" codebases with huge classes and tons of patterns, where I agree that making things leaner and less generic would do a lot of good.

At first, however, I thought that you were against the idea of managing complexity by hierarchically splitting things into components (i.e. basically encapsulation), which is why I asked for clarification, because this idea seems fundamental to me, and seeing someone argue against it got me interested. I now think, though, that you're not against this idea, but against having overly generic abstractions (components? I'm not sure I'm using the word "abstractions" correctly here) in your stack, because they're harder to understand. I assume this is what "black box" means here.

Does it sound correct?


I'm not at all against decomposition and encapsulation. But I do think that the idea of _hierarchical_ decomposition can easily be overdone. The hierarchy idea might be what leads to building "on top" of leaky abstractions.

> When you create your own software abstractions, those just aren't going to be that useful, they are not going to be rock-solid and well tested. They aren't even going to be that stable -- soon a stakeholder might change requirements and you will have to change that component.

I also think it's about how many people you can get to buy in to an abstraction. There are probably better ways of doing things than the unix-y way of having an OS, but so much stuff is built with the assumption of a unix-y interface that we just stick with it.

Like, why can't I just write a string of text at offset 0x4100000 on my SSD? You could, but a file abstraction is a more manageable way of doing it. But there are other manageable ways of doing it, right? Why can't I just access my SSD contents like it's one big database? That would work too, right? Yeah, but we already have the file abstraction.

>But OOP, as I take it, is exactly that idea. That you're creating lots of perfect objects with a clear and defined purpose, and a perfect implementation. And you combine them to implement the functional requirements, even though each individual component knows only a small part of them, and is ideally reusable in your next project!

I think OOP makes sense when you constrain it to a single software component with well-defined inputs and outputs. Like, I'm sure many GoF-type patterns were used in implementing many STL components in C++. But you don't need to care about what patterns were used to implement anything in <algorithm> or <vector>; you just use these as components to build a larger component. When you don't have well-defined components that just plug and play over the same software bus, no matter how good you are with design patterns, it's eventually going to turn into an un-understandable spaghetti mess.

I'm really liking your writing style by the way, do you have a blog or something?


I think I agree with your buy-in idea, but I'd add that the Unix filesystem abstraction is almost as minimal as it gets; at least, I'm not aware of a simpler approach in existence. Maybe subtract a couple of small details that might have turned out to be not optimal or useful. You can also, in fact, write a string to an offset on an SSD (open e.g. /dev/sda); you only need the necessary privileges (like for a file in a filesystem hierarchy too, btw).
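
A minimal sketch of that raw write (Kotlin via the JDK; assumes root privileges and a disk you don't mind corrupting):

    import java.io.RandomAccessFile

    fun main() {
        // Opens the block device directly, bypassing any filesystem.
        RandomAccessFile("/dev/sda", "rw").use { dev ->
            dev.seek(0x4100000L)
            dev.write("hello, raw disk".toByteArray())
        }
    }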

A database would not work as mostly unstructured storage for uncoordinated processes. Databases are quite opinionated and require global maintenance and control, while filesystems are less obtrusive: they implement the idea of resource multiplexing using a hierarchy of names/paths. The hierarchy lets unrelated processes mostly coexist peacefully, while also allowing cooperation very easily. It's not perfect, and it has some semantically awkward corner cases, but if all you need is to multiplex a set of byte ranges onto a physical disk, then filesystems are a quite minimal and successful abstraction.

Regarding STL containers, I think they're useful and usable after a little bit of practice. They let you get something up and running quickly. But they're not without drawbacks, and at some point it can definitely be worthwhile to implement custom versions that are more straightforward, more performant (avoiding allocation, for example), have better debug performance, have less line noise in their error messages, and so on. For the most important STL containers, it's quite easy to implement custom versions with fewer bells and whistles. Maybe with the exception of map (a red-black tree), which is not that easy to implement and is sometimes the right thing to use.
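
To illustrate how little a bare-bones container needs, here's a minimal growable array, sketched in Kotlin to stay consistent with the rest of the thread (a C++ version is similarly short):

    class GrowArray<T> {
        @Suppress("UNCHECKED_CAST")
        private var data = arrayOfNulls<Any?>(8) as Array<T?>
        var size = 0
            private set

        fun add(x: T) {
            // Double the backing array when full: amortized O(1) appends.
            if (size == data.size) data = data.copyOf(data.size * 2)
            data[size++] = x
        }

        operator fun get(i: Int): T {
            require(i in 0 until size) { "index $i out of bounds" }
            @Suppress("UNCHECKED_CAST")
            return data[i] as T
        }
    }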


> I'm really liking your writing style by the way, do you have a blog or something?

Thank you! I don't get to hear that often. I have to say I was almost going to delete that comment above because it's too long, the structure and build-up are less than clear, there are a lot of "just"s in it, and I couldn't edit it anymore. I do invest a lot of time trying to write comments that make sense, but I have never seen myself as a clear thinker or a good writer. To answer your question, earlier attempts to start a blog didn't really go anywhere... Your comment is encouraging though, so thanks again!


OOP means different things to different people, see e.g. [0]. Many types of OOP that were popular in the past are, indeed, dead. Many are still alive.

[0] https://paulgraham.com/reesoo.html


I'd personally declare dead everything except 3 and 4, because, unlike the rest, polymorphism is genuinely useful (e.g. Rust traits, Kotlin interfaces).
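
A minimal illustration of why interface polymorphism earns its keep (Kotlin, made-up names):

    interface Shape { fun area(): Double }

    class Circle(private val r: Double) : Shape {
        override fun area() = Math.PI * r * r
    }

    class Square(private val side: Double) : Shape {
        override fun area() = side * side
    }

    // Works for any Shape, present or future, without modification.
    fun totalArea(shapes: List<Shape>): Double = shapes.sumOf { it.area() }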

Trivia: Kotlin interfaces were initially called "traits", but with the Kotlin M12 release (2015) they were renamed to interfaces, because Kotlin traits basically are Java interfaces. [0]

[0]: https://blog.jetbrains.com/kotlin/2015/05/kotlin-m12-is-out/...


1 is about encapsulation, which makes it really easy to unit-test stuff. Say you need to access a file or a database in your test: you can write an abstraction on top of the file or DB access and mock that.
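
Something like this (Kotlin, hypothetical names): production code depends only on the interface, and tests swap in the fake.

    import java.io.File

    interface ConfigStore {
        fun read(key: String): String?
    }

    // Real implementation: parses key=value lines from a file.
    class FileConfigStore(private val path: String) : ConfigStore {
        override fun read(key: String): String? =
            File(path).readLines()
                .map { it.split("=", limit = 2) }
                .firstOrNull { it.size == 2 && it[0] == key }
                ?.get(1)
    }

    // Test double: no filesystem involved.
    class FakeConfigStore(private val entries: Map<String, String>) : ConfigStore {
        override fun read(key: String): String? = entries[key]
    }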

2 indeed never made sense to me, since once everything is ASM, "protected" means nothing, and if you can get a pointer to the right offset you can read "passwords". The claim that enforcing what can and cannot be reached from a subclass helps security never convinced me.

3: I never liked function overloading; I prefer optional arguments with default values. If you need a function to work with multiple types for one parameter, make it a template and constrain which types can be passed.
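
E.g., in Kotlin (made-up names):

    // Default value instead of an overload set.
    fun greet(name: String, punctuation: String = "!"): String =
        "Hello, $name$punctuation"

    // Generic, constrained to types that can be compared.
    fun <T : Comparable<T>> clamp(x: T, lo: T, hi: T): T =
        if (x < lo) lo else if (x > hi) hi else x

    fun main() {
        println(greet("world"))      // Hello, world!
        println(greet("world", "?")) // Hello, world?
        println(clamp(7, 0, 5))      // 5
    }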

7: Interfaces are a must-have for when you want to add tests to a bunch of code that has no tests.

8: Rust macros do this, and they're a great way to add functionality to your types without much hassle.

9: idk what this is.


I'm all for FP, but there's no way the mainstream is taking side-effects seriously.

Neither Kotlin nor Rust cares about effects.

Switching to Kotlin/Rust for FP reasons (and then relying on programmer discipline to track effects) is like switching to C++ for RAII reasons.


Side effects are not necessarily a bad thing; unintentional side effects are. With some exceptions, such as UI frameworks, I find it harder to unintentionally create a side effect. Also, UI is basically one huge side effect.

Kotlin and Rust are just a lot more practical than, say, Clojure or Haskell, but they both take lessons from those languages.


> Side effects are not necessarily a bad thing; unintentional side effects are

Right. Just like Strings aren't inherently bad. But languages where you can't tell whether data is a String or not are bad.

No language forbids effects, but some languages track effects.
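
Kotlin doesn't track effects in general, but "suspend" is one narrow effect the compiler does enforce, which gives a taste of what tracking looks like (sketch; needs kotlinx-coroutines-core):

    import kotlinx.coroutines.delay

    suspend fun fetch(): String {
        delay(100) // suspending call: legal here
        return "data"
    }

    fun plain(): String {
        // Calling fetch() here is a compile error: suspend functions
        // may only be called from coroutines or other suspend functions.
        return "no suspension here"
    }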


I wouldn't say OOP failed in any general sense. There seem to be these strawman exceptions, and Conway's law is often what's actually the problem.

Having tried Kotlin in IDEA, I must admit their refactoring tools for Java are miles ahead of the ones for Kotlin.

I don't know how strong the lock-in is.


Near-infinite, until they finish the Kotlin LSP alpha.

Also, hot take: Kotlin simply doesn't need this many refactoring tools, thanks in part to its first-class FP support. In fact, almost every non-Android Kotlin dev I have ever met would be totally happy with analysis and refactoring on par with Rust Analyzer.

But even with the LSP, I would still need IDEA (at least Community) for Java -> Kotlin migration and smooth Java interoperability.




