Notably, they forgot to improve on readability and maintainability, both of which are markedly worse with Perl.
Look, I get that people use the tools they use, and Perl is fine, I guess; it does its job. But if you use it, you can safely expect to be mocked for prioritizing string operations, or whatever Perl offers, over writing code anyone born after 1980 can read, let alone is willing to modify.
For such a social enterprise, open source orgs can be surprisingly daft when it comes to the social side of tool selection.
Would this tool be harder to write in Python? Probably. Would it be smart to use Python regardless? Absolutely. The aesthetics of Perl are an absolute dumpster fire. Larry Wall deserves persecution for his crimes.
Did you miss the post a few above yours, where an author of this tool explained why it’s written in Perl? Introducing a new language dependency for a build, especially of an OS, is not something you undertake lightly.
Right. Good luck finding people who want to maintain that. It just seems incredibly short-sighted unless the current batch of maintainers intend to live forever.
Eh, finger pointing does nobody any good, emphatically including this comment. Finger pointing towards someone who actually found a vulnerability is just bleak. I would not willingly associate with anyone who engaged in such behavior.
Maintaining software is hard, but this does not imply a right to be babied. People should simply lower their expectations of security to match reality. Vulnerabilities happen and only extremely rarely do they indicate personal flaws that should be held against the person who introduced it. But it's your job to fix them. Stop complaining.
>Finger pointing towards someone who actually found a vulnerability is just bleak. I would not willingly associate with anyone who engaged in such behavior.
Nobody is "finger pointing" at Rachel for the vulnerability. They're calling her out for how she communicated it, and I feel that's totally justified. For instance, if someone found a critical RCE but the report was a barely coherent stream of consciousness, it would be totally fine to call the latter out. That's not "finger pointing".
>But it's your job to fix them. Stop complaining.
Is it the developers' job to respond to bug reports in the form of vaguely written blog posts?
>But everyone is grilling the author for publishing
What's the alternative? Having no quality bar for vulnerability reports, and giving no pushback on poorly written ones, even if they're crayon scribbles on a napkin? I agree that not everyone can write a detailed, thoroughly researched bug report like the ones Project Zero puts out, but I think most can agree that "you might want to stop using [software]" is well below any reasonable quality bar.
>Maybe they should sell it next time, no negative reaction that way
Yeah I'm sure 0day groups are going to be paying top dollar for weird crashes.
How is this not justified? For that matter, how was rachelbythebay's first post not fingerpointing? "You might not want to be associated with nukem222, not even a little bit, if you know what I mean".
Fingerpointing is bad, but we have to have an honest conversation.
One person posted the vague post. They clearly did not expect the reaction it got, though they could have anticipated some of it, since they know their blog is widely read. Their reaction since has been commendable: quickly posting a followup appealing for calm and sharing some details, to quell the problems caused by the intense vagueness.
What people from HN did, because of the vagueness, was assume this is a super-secret-squirrel mega-vulnerability and that Rachel is gagged by NDAs or the CIA or whatever... and then they went off and harassed the developers of atop while trying to find the issue.
Imagine a person of note saying "the people at 29 Acacia Road are suspicious", then a mob breaks down the door and start rifling through all the stuff there, muttering to themselves "hmm, this lamp looks suspicious... this fork looks suspicious"... absolute clowns, all of them.
No, you dummies, it's not going to be in the latest commit, or easily greppable.
This is exactly why CVEs, coordinated disclosure, and general security reporting practices exist: so that every single issue doesn't result in mindless panic and speculation.
There's now even a CVE based purely on the vaguepost, filed by a reporter who clearly knows fuck all about what the problem is: https://www.cve.org/CVERecord?id=CVE-2025-31160 - versions "0" through "2.11.0" vulnerable, eh? That would be all versions. The reporter chose that range because they don't know which versions are vulnerable, and they don't know what it's vulnerable to either. Somehow "don't know", the absence of information, has become a concrete "versions 0 to 2.11.0 inclusive"... just spreading the panic.
I don't know why Rachel is vagueposting, but I can only hope she has reported this correctly, which is to:
1. Contact the security team of the distro you're using. E.g. if you're using atop on Debian, email security@debian.org with the details.
2. Allow them to help coordinate a response with the packager, the upstream maintainer(s) if appropriate, and other distros, if appropriate. They have done this hundreds of times before. If it's critically important, it can be fixed and published within days, and your worries about people being vulnerable because you know something they don't can be relieved, all the more quickly.
I commend you for writing what you think should be done and not just complaining about what was done. It is more helpful to express the correct procedure than to only label things as the wrong procedure.
I never quite understood why computing is so different from literally every other branch of reality. Systems need to be secure, I get it. But we have a bunch of folks dedicating their lives to breaking your shit, and I don't get how that is in any way acceptable, or why the weight of responsibility lies solely with the people responsible for security.
We apparently have a society/world that normalizes breaking everyone's shit. That's not normal - IMO.
If I break into a factory or laboratory of some kind and just walk out again, I have not found a "vulnerability", and I certainly won't be remunerated or awarded status or prestige in any way, shape, or form. I will be prosecuted. Anyone can break into stuff. It's not that stuff is unbreakable; it's that you just don't do that, because the consequences are enormous (besides the obvious issues with morality). Again, breaking stuff is the easy part.
I am certainly completely ignorant and should be drawn and quartered for it, but it's hard for me to put my finger on where I'm so wrong.
I can see how the immaterial nature of software systems changes the nature of the defense, but I don't see how it immediately follows that breaking stuff you're not allowed to break is suddenly the norm and nothing can be done about it. We just have to shrug and accept our fate?
Leaving aside the ethics of vulnerability research in server-side software, you're neglecting the fact that atop runs on your own machine.
So it's not like breaking into a factory. It's like noticing that your dishwasher makes the deadbolts in your house stop working (yes, a weird analogy; there are ways software isn't like physical appliances).
Surely you have the right to explore the behavior of your own house's appliances and locks, and the manufacturer does not have the right to complain.
As for server side software, I think the argument is a simple consequentialist one. The system where vulnerability researchers find vulnerabilities and report them quietly (perhaps for a bounty, perhaps not) works better than the one where we leave it up to organized crime to find and exploit those issues. It generates more secure systems, and less harm to businesses and users.
I'm sorry for being the ignoramus and lazy ass that I am, having not read a single sentence of the article.
You are, of course, right. Examining stuff to be brought into your own home is categorically different from meticulously analyzing and publishing the security vulnerabilities of your local power plant.
I can get behind the consequentialist argument. Sometimes we've just got to go with what works, but I wonder if we give up too easily.
If I buy a physical product, take it home, and then publish the various issues I find with it... nobody has a problem with that.
I'm as sad as the next guy that the safe and trusting internet of academia is long gone, but the generally accepted view nowadays is that it's absolutely full to the gills with opportunistic criminals. Letting people know that their software is insecure so they don't get absolutely screwed by that ravening horde is a public service and should be appreciated as such.
Pen testing third party systems is a grey area. Pen testing publicly available software in your own environment and warning of the issues is not, particularly when the disclosure is done with care.
I agree and conforming to HN rules, guidelines and established practices I did not, in fact, read or engage with the article at all (and I apologize).
Your view is one I agree with completely for a device bought to bring into your own home.
What I find less understandable is how finding (and exploiting) security flaws in publicly facing structures is normalized to the degree that it is. I could easily analyze some public structure and publish detailed records on how you would most efficiently break into my local hardware store. I'm not sure I'm seeing the net win for society.
How is it better to not look into or share such information when we know that a vast army of assholes are doing the same thing for nefarious purposes?
Yes, they might not spot it themselves, but we know that in practice they often do, and the results are horrible. If we stop looking, then they will definitely be the first to find vulnerabilities; as it is, they are only sometimes the first (and the vulnerabilities they find are likely to be the less appalling ones).
Privately sharing the issue with the authors lets them fix it in a timely way; publicly announcing the issue after a reasonable period incentivises them to do so, since corporate authors often won't bother unless their arms are twisted.
If those black-hat hackers were not really out there then I might agree with you, but they are, and they don't care that we don't like it.
In a way I definitely see your perspective here. Letting the "good guys" win this race occasionally is an improvement over never letting them win.
It's just that I think we can do better, because I think the web is a hostile, vitriolic open sewer that must be governed properly before civilized business can be conducted on it. It was perhaps once a great, innovative place, but it is now a dumpster fire causing endless headaches, beyond redemption. I think it's time to face this reality instead of trying to dress up the turd.
Are you not aware that the internet is an international artefact? Will you institute a Great Firewall to prevent your citizens from seeing outside your borders?
An inconvenient question I often ask about proposed architecture changes is: "How will you get there from here?" - if you can't answer it then it's not going to happen.
> if you can't answer it then it's not going to happen.
My point is that if we as a technical community don't start looking outside our technical bubble the powers that be will at some point figure out a way to get there from here and without consulting you (us). But maybe that's wrong and maybe nothing's going to need changing. I hope so.
Well, also in the real world, if you look at history, people DID exploit the neighbouring tribe with impunity if it could not defend itself ("what idiots don't post a guard at night"), or built stone fortresses with 3 metre walls.
When living under those conditions, people probably did put the responsibility for safety on the victim.
We have been able to eliminate this waste thanks to the introduction of the nation state, laws, the "monopoly on violence", police...
It is THOSE things that allows the factory in your analogy to not spend resources on a 3 metre stone wall and armed guards 24/7.
Now, on the internet the police, at least relative to the physical world, almost completely lack the ability to investigate or enforce anything. They may use what tools they can, but those don't get them much in the digital world compared to the physical.
If we want internet to be like the real world in this respect, we would have to develop ways to let the police see a lot more and enforce a lot more. Like they can in the physical world.
> If we want internet to be like the real world in this respect, we would have to develop ways to let the police see a lot more and enforce a lot more. Like they can in the physical world.
I agree, and it's exactly this that's so often violently opposed by the technical community, who routinely froth at the mouth at the suggestion that law enforcement needs access in order to function. Meanwhile that same community, often with its fancy, expensive lives, enjoys the widespread, comfortable physical and legal protection afforded by that very same law enforcement, which is only possible because that agency has far-reaching legal and lethal powers.
It can be abused and it will be abused, but I guess it comes down to do we want comfortable lives or do we want to be free?
IMO it's a matter of time before some nation-state-level actor unleashes a digital shit-storm of astronomic proportions, which will necessitate swift political decisions, and my guess is we'd better have an open, realistic discussion about it now instead of then.
"If I break into a factory or laboratory of some kind and just walk out" This is a weak analogy. In the situation you describe, right-and-wrong is easily understood by the layman, there is a common legal framework, there is muscle to enforce the legal framework.
In the computing space, if someone breaks the rules, only a bunch of us understand what rule was broken, and even then we are likely to argue over the details of it. The people doing the breaking are often anonymous. There is no shared legal framework, or enforcement, or courts. The consequences of a break are usually weak. Consider the lack of jail time for anyone involved with Superfish, even though many of those people were located in the developed world.
The computing world often resembles the lawlessness of earlier eras - where only locally-run fortifications separated civilian farmers from barbarian horsemen. A breach in this wall leads to catastrophe. It needs to be unbreakable. People who maintain fortifications shoulder a heavy responsibility.
Maybe it's more like analyzing and publishing the security vulnerabilities of said factory or laboratory. It's not trivially right or wrong to do so. It seems acceptable, because you are helping them make it more secure (right?), yet most societies are quite adamant that it's not, in fact, normal, or legal, to do so. You'll get yourself in quite a bit of trouble if you do that.
Just moving to Nigeria and publishing security bulletins on how to break into Walmarts is still a shaky proposition, but perhaps it's safer than I think it is. The international judiciary is opaque to me.
> The computing world often resembles the lawlessness of earlier eras - where only locally-run fortifications separated civilian farmers from barbarian horsemen. A breach in this wall leads to catastrophe. It needs to be unbreakable. People who maintain fortifications shoulder a heavy responsibility.
Sounds about right. I'm not too happy about it, although I guess this particular era has its advantages as well.
Lockpicking is probably a close analogy, and that is a perfectly accepted and legal hobby in all Western countries, with thousands of YouTube videos on how to pick common locks.
Computing is actually different. There are laws for example in Germany ("Hackerparagraph") that make it illegal to produce "hacking" tools.
We can lock down the Internet so hard that every IP packet is associated with a physical address, then go and arrest people who allow bad packets to be sent from their address. This is what many governments are persistently trying to do. Is it a good idea?
Not sure, but I don't see why we can't have a civil discussion about it and I'm not seeing much of that.
It's either A) we The People are completely free and nobody can intervene in any way or B) The Government is a tyrannical overlord that controls every packet that dares to enter the internet.
Absolute freedom never was and never will be a good idea. If we don't at least talk about it, then somehow, someday, and maybe quite soon, They will ram it down our throats and we'll end up closer to scenario B than A.
The internet reminds me in many ways of the international road network. There are clear boundaries and there are checks and, yes, they suck. It's not a complete free for all, yet it's workable. I know this analogy breaks down eventually, but I'm wondering if there's some middle ground here.
I guess I am jaded by some branches of "hacker culture" with a proclivity for taking pride in activities and mindsets - breaking in, finding exploits, destruction - that I don't find particularly palatable, without understanding the social and eventual political backlash that will strip away your freedoms faster than you can say "papers, please".
> It's either A) we The People are completely free and nobody can intervene in any way or B) The Government is a tyrannical overlord that controls every packet that dares to enter the internet.
Do you mean the various efforts to weaken crypto stuff? I don't think I object in principle to law enforcement having access to the information for law enforcement purposes, but we know that any kind of access is subject to scope creep particularly when you lower the threshold for that access. First it's to enforce reasonable laws, then it's to enforce unreasonable laws, then it's because someone bribed a policeman. Not necessarily in that order.
Besides, the main problem with safety on the internet is not that law enforcement has no tools, it's that the crimes cross political borders. You can (in principle) identify the culprit in Russia easily enough when the money is laundered, but how exactly do you plan to bring them to justice?
Where I live, the police have raided people's homes for protesting things Israel did. And when I was a victim of an actual violent crime, they kept saying how they should arrest me, because according to demographic profiling I was the perpetrator (I was there, but I wasn't the perpetrator), all the way up to the courtroom where the actual perpetrator barely avoided prison time. So no, I don't really trust them to access my private anything. Any society with a hope of stability obviously needs some way to enforce laws, but this isn't it.
But even if the police where you live were perfect, handing them the keys to the internet wouldn't resolve crimes committed outside their jurisdiction.
I see why the idea is appealing to politicians, but even they ought to think twice about the risks inherent in third parties accessing their most private communications, given that, whatever side of the political aisle they sit on, they are likely to be much more interesting targets to better-resourced assailants than us average schmucks.
The analogy is not perfect, but the police already have extreme physical powers, and those can be (and occasionally are) abused; that's the price you pay for protection. If we don't pay it, we accept that the bad guys will run all over us for eternity and that everybody and their mother has to have, at minimum, a couple of AK-47s for basic safety.
I second this. The pompous, holier-than-thou, I-know-better attitude some members of the computer security community have has always rubbed me the wrong way. This complaining is a manifestation of the typical "putting down" and dismissing of someone who isn't part of the tribe.
I agree with all of this. I want to offer a tiny bit more hope, though:
> There have been a lot of bold promises (and genuine advances), but I don't see a world in the next 5 years where AI writes useful software by itself.
I actually think the opposite: that within five years, we will be seeing AI one-shot software, not because LLMs will experience some kind of revolution in auditing output, but because we will move the goalposts to ensure the rough spots of AI are massaged out. Is this cheating? Kind of, but any effort to do this will also ease humans accomplishing the same thing.
It's entirely possible, in other words, that LLMs will force engineers to be honest about the ease of tasks they ask developers to tackle, resulting in more easily composable software stacks.
I also believe that use of LLMs will force better naming of things. Much of the difficulty of complex projects comes from simply tracking the existence and status of all the moving parts and the wires that connect them. It wouldn't surprise me at all if LLMs struggle to manage without a clear shared ontology (that we naturally create and internalize ourselves).
It’s fascinating how this debate is going exactly as the car debate went. People were arguing for a whole spectrum of environment modifications for self-driving cars.
I’ll take the other side of that bet. The software industry won’t make things easier for LLMs. A few will try, but will get burned by the tech changing too fast to target. Seeing this, people will by and large stay focused on designing their ecosystems for humans.
> we will move the goalposts to ensure the rough spots of AI are massaged out
Totally agree with this point. Software engineering will adapt to work better with LLMs. It will influence how we think about programming language design, as an interface to human readers/writers as well as for machines to "understand", generate, and refine.
There was a recent article about how LLMs will stifle innovation because of their training cutoff: it's more productive to use older or more mature frameworks and libraries whose documentation is part of the training data. I'm already seeing how this affects technical decisions at companies. But then again, it's similar to how such decisions are often made to maximize the labor pool, for example choosing a more popular language because of the availability of experts.
One thing I hope for is that we'll see more emphasis on thorough and precisely worded documentation. Similarly with API design and user interfaces in general, possibly leading to improvements in accessibility also.
Another aspect I think about is the recursive cycle of LLM-generated code and documentation being consumed by future LLMs, influencing what kinds of new frameworks and libraries may emerge that are particularly suited for this new kind of programming, purely AI or human/AI symbiosis.
> One thing I hope for is that we'll see more emphasis on thorough and precisely worded documentation.
Being on this planet long enough, I've learned this won't happen. In fact, the quality of such documentation will degrade, making the AIs that consume it degrade too, and we'll all just have to accept these flaws and work around them, like the myriad technical flaws in our current systems today.
True, that's a probable and very real risk that this recursive cycle of LLMs consuming LLM-produced content will degrade the overall quality of its "intelligence" and "creativity", maybe even the quality of literature, language, and human thought itself. By the time we collectively realize the mistake, it will be too late. Like microplastics, it will be everywhere and inside everything, and we'll just have to learn to live with it for better or worse.
Readers should note that it's super buggy. Occasionally your music will be unavailable, even though it can be matched to tracks that ARE available on Apple Music (it just says "not available in your region" and is greyed out). Sometimes it also "matches" to the clean version of a track (may god damn people who think censoring music is a good idea) and there's no way to correct it. I've lost an impressive amount of music by forgetting what's mine and accidentally deleting it while trying to figure out why it's not playing.
It was half-baked when they shipped it, and it still has the same launch bugs. Still, it's better than Spotify.
It sounds like you're confusing iTunes Match and iCloud Music Library. I've been using iTunes Match for about 15 years and never had any issues. It doesn't match your stuff with Apple Music (which didn't exist when Match launched).
I personally think you're both right; it depends on when you first started using Blender. You're 100% correct that interfaces at the time Blender was released were completely experimental. I'd compare it to trying to navigate AI as an uninformed end user when it was first introduced, versus now. That's where I would put Blender when it was first released: the interface was good for what it was compared to the other popular 3D software of the time, but it's so much better and more evolved now.

Mentioning shortcuts makes you a power user, which should be the goal with any piece of software you use for your career. I just don't think people bother to learn as much as you did back when you started using Blender. That shows your dedication, but it also shows the lack of deep knowledge most (not all) people currently have when they start with a new technology. It's not unknown to get a fresh graduate or self-taught person who wants to deep-dive into the software they use daily, but I feel it's far more uncommon than it was 20 or more years ago.
> An incredible amount of effort and ingenuity has gone into CSV parsing because of its ubiquity.
Yeah, and it's still a partially-parseable shit show with guessed values. But we could have, and should have, done better by simply defining a proper format to use.
Schemaless data can be handled with well-formed formats like JSON, XML, YAML, TOML, etc.; from the producer side these are roughly equivalent interfaces. There's zero upside to using CSV except to comfort your customer. Or maybe you have centered your business on importing CSVs, in which case you should probably not exist.
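To illustrate the "guessed values" complaint concretely, here's a minimal sketch (the file contents are invented for illustration): a CSV cell is always just a string, so every consumer has to re-guess types, while JSON makes the producer commit to them.

```python
import csv
import io
import json

# CSV: is "00123" an ID (keep the zeros) or the number 123?
# Is "true" a boolean or a string? The format cannot say.
raw_csv = "id,zip,active\n00123,02134,true\n"
rows = list(csv.DictReader(io.StringIO(raw_csv)))
print(rows[0])  # {'id': '00123', 'zip': '02134', 'active': 'true'} - all strings

# JSON: the producer has already committed to types; nothing is guessed.
raw_json = '{"id": "00123", "zip": "02134", "active": true}'
doc = json.loads(raw_json)
print(type(doc["active"]))  # <class 'bool'>
```

Any CSV importer has to layer type inference (and delimiter and encoding sniffing) on top, which is exactly where the divergent, guess-driven parsers come from.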
I would download it, but I wanted to be productive today.