
Sure, though in the grand scheme of things I wouldn't argue that a few seconds is a legitimate slowdown. I just really struggle to buy into the argument that "non-Docker" is superior and that introducing Docker is a problem. It's _another_ way to do deployments, and it's not strictly worse. There are tradeoffs on both sides, although I would argue Docker has far fewer than just using systemctl and SSH.

> Adding an additional layer also means that layer needs to be managed at all times, and additional setup is required to use it. This starts at installing docker-related tooling, having to do extra work to access logs inside containers, additional infrastructure management/maintenance (e.g. a private repository), Docker compatibility between versions (it's not very good at maintaining that), etc.

Docker is available on every major distribution, and installing it once takes seconds. Accessing logs (docker logs mycontainer) takes just as long as with systemd (journalctl -u myservice). Maintaining a registry is optional; there are dozens of one-click SaaS services that give you a registry instantly, many of them free. Besides, I would argue a registry saves significantly more time than it costs, because it lets you properly track builds.
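
For concreteness, the day-to-day log commands are near-identical (using the same container/service names as above):

    # systemd: follow a unit's logs
    journalctl -u myservice -f

    # Docker: follow a container's logs
    docker logs -f mycontainer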

> Docker needs to copy far more than just the application files. Avoiding an extra copy of 100MB data (OS + required env) during deployment

This is only partially true. Images are layered, and if the last thing you do is copy your binary into the image (the default Docker practice), then a deployment can take essentially the same time, since only one new layer (the size of the application) is downloaded. The claim is fully true only on brand-new machines, which is a mostly irrelevant case to consider.
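
To sketch what that layering looks like (the base image, package, and binary names here are placeholder examples, not a prescription):

    FROM debian:bookworm-slim
    RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates
    # The layers above rarely change, so they stay cached on the
    # registry and on every host. Because the binary is copied last,
    # a typical redeploy pushes/pulls only this one small layer.
    COPY myapp /usr/local/bin/myapp
    ENTRYPOINT ["/usr/local/bin/myapp"]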



I think the point is that not using Docker is easier, simpler, cheaper and better than using Docker.

Unless it is not, in which case you should use Docker.

But many (all?) of us have had the experience of a manager insisting on some "new thing" (LLMs are the fad of the day) and claiming that if we are not using it we are falling behind. That is not true, as we all know.

It is very hard for the money people to manage the tech stack, but they need to; it is literally their job (at the highest level). We desperately need more engineers who are suited to it (I am not!) to go into management.


>I wouldn't argue that a few seconds is a legitimate slowdown

Seconds are an eternity in the domain of computing.


But this assumes that Docker provides _no_ time-saving advantages, which is simply false. The person who recently responded to me noted as much themselves. There are several scenarios where Docker is superior, especially when external dependencies are involved.
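
One illustration of the external-dependency case (the service and version here are just an example): a pinned dependency runs with a single command, with no host-level installation to manage:

    # Run a pinned Postgres without installing it on the host;
    # every machine gets exactly the same version.
    docker run -d --name db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16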

My point is that the universal argument that Docker is inferior to manually copying binaries is flawed. It's usually put forward by people who happen to fit the narrow scenario where that is true. If we can agree that both options have trade-offs, and that a team should pick the option that best fits its constraints, then that's pretty much where most of the world already sits. There are extremists on both sides, but their views are just that: extreme.



