I know I'm late to the party, but I still wanted to answer this. Disclosure: I am one of the Managing Directors at Ferrous Systems.
I believe you're falling victim to a common misconception about what the FLS is and what it aims to be. It is - at least as of now - a description of the language that is good enough to certify the compiler. It was never intended to be a spec that describes Rust in completeness - for example, the FLS is absolutely insufficient for implementing a Rust compiler. As such, we have no aspirations to completeness in any shape or form. While we get a lot of feedback about pieces that people consider to fall short of their expectations, it is expected that there are gaps that Ferrous Systems will never try to fill. The FLS in its current shape is good enough for us.
Now that the Rust project adopted the FLS as the initial nucleus of a spec, I hope that others can contribute and fill the gaps that they need filled.
- B: a description of a model, its operations, and terminology
The other is
- X: "the only job is to describe, as clearly and completely as possible", vs
- Y: "a chance to relax and step back"
My reading of the "what nobody tells you" article is that it envisages a reference which is A and X, and an explanation which is B and Y.
I recommend that the reference should be A, B, and X. Having an explanation which is B and Y is a fine thing too.
So basically the bit I don't like in that article is where it says the reference should not attempt to explain basic concepts. Maybe it doesn't need to try to teach concepts, but it very often does need to define them.
(Also, I think rationale fits well in the "describe as clearly and completely as possible" document, especially if you're in a position to separate it a bit from the main text.)
If that is the case, then what is the explanation for NIST (according to DJB) 1. not communicating their decision process to anywhere near the degree that they vowed to, and 2. stonewalling a FOIA request on the matter?
> Whether this is a byproduct of trying to be exacting in the language used that tends to cause people interpretive problems or a specific tactic to expose those that are a combination of careless with their reading and willing to make assumptions rather than ask questions is unknown to me
Communicating badly and then acting smug when misunderstood is not cleverness (https://xkcd.com/169/).
If many people do not understand the argument being made, it doesn't matter how "exacting" the language is - the writer failed at communicating. I don't have a stake in this, but from afar this thread looks like tptacek making statements so terse as to be vague, and then going "Gotcha! That's not the right interpretation!" when somebody attempts to find some meaning in them.
In short: if your standard advice is "you should ask questions to understand my point", you're doing it wrong. This isn't "HN gathers to tease wisdom out of tptacek" - it's on him to be understood by his readers (almost all of whom are lurkers!). Unless he doesn't care about that, but only about shouting (what he thinks are) logically consistent statements into the void.
I think you must be talking about CVE-2010-0232; it wasn't 90 days, it was more like 180. This was at a time when Microsoft refused to release kernel patches outside of service packs. I begged Microsoft at multiple in-person meetings at Redmond to reconsider and patch; they simply refused and said there would be repercussions if I disobeyed.
After four months of negotiations, I told them I was going to publish it whether a patch was available or not. This didn't have the effect I had hoped; they started threatening me instead. They called me and told me my career would be destroyed. In one particularly memorable call they told me that their PR and legal department only had two settings, "off and destroy", and (in a rather menacing tone) that they would "air my dirty laundry in public". I still don't know what that means.
I was shaken, but told them I was still going ahead. They responded by calling everyone they knew at my employer and demanding I be terminated.
There was a trivial mitigation: just disabling a very rarely used feature (VDM support for 16-bit applications). I made detailed documentation explaining how to enable the mitigation for every supported platform, and even made tutorial videos for administrators on how to apply and deploy the group policy settings.
I sent these detailed instructions to all the usual places that advisories are published. I included a test case so you could verify if the bug affected you and verify the mitigation was correctly deployed. As you can imagine, Microsoft were furious.
I know it's little comfort, but through some hard-fought battles over the last decade we have reached the point where Microsoft can reluctantly patch critical kernel security bugs if given around three months' notice. They still pull some dirty tricks to this day - you wouldn't believe some of the stories I could tell you, but those are war stories for sharing over beers :)
It sounds like your attackers compromised you via an outdated WordPress installation, then escalated privileges with this vulnerability. I'm not sure I agree that the blame here lies solely with me, but regardless, I would recommend subscribing to the announce lists for the software you're deploying. You could also monitor the major security lists for advisories related to the software you use. They're high volume and vary in quality, but you can usually identify the advisories that apply to you easily.
When I file a bug report, I often feel I would like to add a disclaimer, along the lines of:
« I'm filing this bug report because I found a bug. This isn't a complaint that the bug exists, or a suggestion that you prioritise fixing it, or a support request asking for workarounds. It's just a report that the bug exists. »
But I think actually writing that would come over as rather snotty. Maybe the right thing is to write a post somewhere on what I think is the right attitude for filing bug reports, and discreetly link to it.
The view I've heard expressed is that deep thought about a piece of code reduces bugs. Whether that takes the shape of religious TDD, rigorous proofs or detailed type design doesn't make all that much difference.
I used to be fully bought into types, but I've since realised that they have a number of downsides that in many cases more than offset their benefits:
1. Ergonomic type systems require a lot of work to happen at compile time, which slows down iteration (one of the more important things in programming, in my view). Saving the source and seeing the result almost immediately in a browser is one of the big advantages of web development.
2. Types are almost always written in a second, much less powerful DSL and then sprinkled distractingly through the code that actually does the work. I prefer the way Haskell does this: it separates the type signature out onto its own line rather than mashing the two different languages together.
3. Higher levels of abstraction tend to become very hairy in many type systems (although not all). This ends up meaning that people who like types often (unconsciously) restrict themselves to less abstract programming. They spot the time they're saving by avoiding some kinds of bugs, but they don't see the time they're wasting by being unable to talk at a higher level of abstraction. Another way this shows itself is that types are very rarely first-class objects in strongly typed languages, which makes it very difficult to write code that operates on, or understands, types.
4. Type systems open up opportunities for type-driven architecture astronauting, which is yet another unproductive rabbit hole to go down. There was an interesting study of different teams solving the same problems in different languages. The differences between teams using the same language were much bigger than the differences between languages, but the team that made the slowest progress (without having an unusually low number of bugs to show for it) was the one that leant hardest into encoding everything in the type system.
5. Type systems encourage code-generation build pipelines, which again slow iteration and make everyone's life miserable.
6. Type systems reflect an incorrect model of the world - user input, network input and file system data are not typed. I've had no end of misery with web server frameworks that refuse to acknowledge that they can't know every possible thing the client might send them and slot it neatly into a predefined type. I think this is the same kind of error we made with OO systems - thinking that we could fit the world into a predefined inheritance hierarchy.
7. Type systems encourage a static view of the world. The types of things can change under you dynamically (e.g. the structure of a table in a database), but in most typed languages you can't cope with that correctly without shutting down and deploying entirely new code (see the sketch just after this list).
8. Related to that, it's hard to imagine using a strongly typed language with the live-image approach of Smalltalk, or the one sometimes used by Lisp systems. This means that the popularity of strongly typed languages is killing valuable and interesting approaches to building complex systems, approaches that emphasise observability, interaction and iteration as ways of understanding them.
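To make point 7 concrete, here is a minimal Python sketch (standard-library sqlite3, with a made-up users table - the names are purely illustrative). Because the columns are discovered at runtime, the same code keeps working when the schema changes underneath it, which is exactly where a statically generated record type would force a redeploy:

    import sqlite3

    def dump_table(conn, table):
        # Discover the columns at runtime; PRAGMA table_info returns one row
        # per column as (cid, name, type, notnull, default_value, pk).
        columns = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
        for row in conn.execute(f"SELECT * FROM {table}"):
            yield dict(zip(columns, row))

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'ada')")
    print(list(dump_table(conn, "users")))

    # The schema changes while the program is running; the same code copes.
    conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
    conn.execute("INSERT INTO users VALUES (2, 'grace', 'grace@example.com')")
    print(list(dump_table(conn, "users")))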
There are genuine advantages to typed languages, but many of the advantages touted as unique to them can be provided by advanced linting and IDEs (IntelliJ was surprisingly capable on plain JS + JSDoc). You can also ameliorate some of the disadvantages of untyped languages, while keeping the benefits, by deliberately programming in a fail-fast way.
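To show what I mean by fail-fast, here is a small Python sketch (the field names and rules are made up): untrusted input is checked at the boundary and rejected immediately, rather than letting a vaguely-shaped value drift deeper into the program or pretending we know every field the client might send.

    def parse_signup(payload):
        # Fail fast: reject anything that doesn't look right, at the boundary.
        if not isinstance(payload, dict):
            raise ValueError(f"expected an object, got {type(payload).__name__}")
        email = payload.get("email")
        if not isinstance(email, str) or "@" not in email:
            raise ValueError("missing or malformed 'email'")
        age = payload.get("age")
        if age is not None and not isinstance(age, int):
            raise ValueError("'age' must be an integer if present")
        # Only the validated fields are passed along; unknown extras are simply
        # ignored instead of being rejected for not fitting a predefined type.
        return {"email": email, "age": age}

    print(parse_signup({"email": "ada@example.com", "age": 36, "newsletter": True}))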
I'm sure that type systems have their place. The empirical research I've come across suggests that while there may be positive effects, they are small - which doesn't at all mesh with the extreme partisanship I generally observe. Yes, type systems gain you something, but there seems to be much less awareness of what you lose.
You're absolutely right! There very much is such a scale.
Even the companies that are the very best at communications will tend to struggle to communicate clearly when filtered through a press that has strong incentives to construe everything as a data breach. Lay people reading either of your statements will tend to stop at "there's no evidence" and go "What do you mean, 'no evidence'!? I want certainty!". Witness how readily and widely this whole G+ event has been misreported as a MASSIVE DATA BREACH.
While you're completely correct, perhaps there's room for subtlety here.
«
The solution we use at Google is that we watch for I/O errors using a completely different process that is responsible for monitoring machine health. It used to scrape dmesg, but we now arrange to have I/O errors get sent via a netlink channel to the machine health monitoring daemon.
»
He later says that the netlink channel stuff was never submitted to the upstream kernel.
It all feels like a situation where the people maintaining this code knew deep down that it wasn't really up to scratch, but were sufficiently used to workarounds ("use direct io", "scrape dmesg") that they no longer thought of it as a problem.
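For what it's worth, the dmesg-scraping half of that setup is easy enough to picture. Here is a rough Python sketch - the error patterns and the reporting callback are placeholders of my own, not whatever Google actually matches on, and reading the kernel log may need elevated privileges depending on dmesg_restrict:

    import re
    import subprocess

    # Placeholder patterns; a real deployment would match the exact strings
    # its storage drivers emit.
    IO_ERROR = re.compile(r"I/O error|Buffer I/O error|blk_update_request")

    def watch_dmesg(report):
        # Follow the kernel ring buffer and hand anything that looks like an
        # I/O error to a separate health-reporting callback.
        proc = subprocess.Popen(["dmesg", "--follow"],
                                stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            if IO_ERROR.search(line):
                report(line.rstrip())

    if __name__ == "__main__":
        watch_dmesg(lambda line: print("possible disk trouble:", line))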
We've talked to Tucker Taft about Ada and Rust before, though it was a while ago and I mostly recall that we discussed aspects of Ada's standardization process (I think Graydon was a fan of Ada's spec). Niko would probably remember more.
What an unimaginable horror! You can't change a single line of code in the product without breaking 1000s of existing tests. Generations of programmers have worked on that code under difficult deadlines and filled the code with all kinds of crap.
Very complex pieces of logic, memory management, context switching, etc. are all held together with thousands of flags. The whole code is riddled with mysterious macros that one cannot decipher without picking up a notebook and expanding the relevant parts of the macros by hand. It can take a day or two to really understand what a macro does.
Sometimes one needs to understand the values and the effects of 20 different flags to predict how the code will behave in different situations. Sometimes hundreds of them! I am not exaggerating.
The only reason why this product is still surviving and still works is due to literally millions of tests!
Here is what the life of an Oracle Database developer looks like:
- Start working on a new bug.
- Spend two weeks trying to understand the 20 different flags that interact in mysterious ways to cause this bug.
- Add one more flag to handle the new special scenario. Add a few more lines of code that check this flag, work around the problematic situation, and avoid the bug.
- Submit the changes to a test farm consisting of about 100 to 200 servers that would compile the code, build a new Oracle DB, and run the millions of tests in a distributed fashion.
- Go home. Come the next day and work on something else. The tests can take 20 to 30 hours to complete.
- Go home. Come the next day and check your farm test results. On a good day, there would be about 100 failing tests. On a bad day, there would be about 1000 failing tests. Pick some of these tests randomly and try to understand what went wrong with your assumptions. Maybe there are some 10 more flags to consider to truly understand the nature of the bug.
- Add a few more flags in an attempt to fix the issue. Submit the changes again for testing. Wait another 20 to 30 hours.
- Rinse and repeat for another two weeks until you get the mysterious incantation of the combination of flags right.
- Finally one fine day you would succeed with 0 tests failing.
- Add a hundred more tests for your new change to ensure that the next developer who has the misfortune of touching this new piece of code never ends up breaking your fix.
- Submit the work for one final round of testing. Then submit it for review. The review itself may take another 2 weeks to 2 months. So now move on to the next bug to work on.
- After 2 weeks to 2 months, when everything is complete, the code is finally merged into the main branch.
The above is a non-exaggerated description of the life of a programmer at Oracle fixing a bug. Now imagine what a horror it is to develop a new feature. It takes 6 months to a year (sometimes two years!) to develop a single small feature (say, adding a new mode of authentication, like support for AD).
The fact that this product even works is nothing short of a miracle!
I don't work for Oracle anymore. Will never work for Oracle again!
Thanks, I missed that. I asked around, and I'm told that Mozilla's suit was dismissed with prejudice in Santa Clara County Superior Court (the original case ID is 17CV319921, I think). If true, this helps account for both Chris Beard's termination and the layoffs.