
> Can someone with some more experience explain why this happens? The most basic webserver, like a TCP socket served from a C program, serving some files etc. can take tens of thousands of requests per second, per core, on reasonably recent hardware.

At that time, it was quite common to start a new thread or even fork a new process per request/connection. Apache in particular worked this way:

> https://www.digitalocean.com/community/tutorials/apache-vs-n...

"mpm_prefork: This processing module spawns processes with a single thread each to handle requests. Each child can handle a single connection at a time. [...]

mpm_worker: This module spawns processes that can each manage multiple threads. Each of these threads can handle a single connection. [...]"

(this website also mentions the mpm_event MPM).
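To make the per-connection cost concrete, here is a minimal sketch of the fork-per-connection pattern in plain C (not Apache's actual code; the port and canned response are made up). mpm_worker is essentially the same idea with pthread_create instead of fork:

  /* fork-per-connection: every accepted socket gets its own process */
  #include <arpa/inet.h>
  #include <signal.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void) {
      int listener = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr = { .sin_family = AF_INET,
                                  .sin_port = htons(8080) };   /* arbitrary port */
      bind(listener, (struct sockaddr *)&addr, sizeof(addr));  /* error checks omitted */
      listen(listener, 128);
      signal(SIGCHLD, SIG_IGN);          /* let the kernel reap exited children */

      for (;;) {
          int conn = accept(listener, NULL, NULL);
          if (conn < 0)
              continue;
          if (fork() == 0) {             /* child: serves exactly one connection */
              close(listener);
              const char *resp = "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
              write(conn, resp, strlen(resp));
              close(conn);
              _exit(0);
          }
          close(conn);                   /* parent: loop back to accept() */
      }
  }

Each connection ties up a whole process (or thread) for its entire lifetime, including while a slow client is still sending or reading. Event-driven designs like nginx or mpm_event instead multiplex thousands of connections onto a handful of processes.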

So, if you have tens of thousands of processes or threads on a web server, this can easily overstrain its resources. Additionally, keep in mind that servers at the time were a lot less beefy than today.
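Back-of-the-envelope, with purely illustrative numbers: if each prefork child weighs in at around 15 MB of resident memory (easy to reach once something like mod_php is loaded), then 10,000 concurrent connections would need roughly 150 GB of RAM, on machines that commonly shipped with 1 to 4 GB. Long before that point the box is swapping and the scheduler is thrashing.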



mpm_worker was notoriously unreliable back then and would frequently deadlock the entire Apache process, so the common setup was either to use prefork or to run some sort of monitoring that restarted stuck processes.



