I feel like that's an issue not with what they said, but with what they did. It would be better for them to have checked this quickly, but it would have been worse for them to have said they did when they hadn't. What you're saying isn't wrong, but it's not really an answer to the question you're replying to.
I'd argue that there's a schema; it's just defined dynamically by the queries themselves. Given how much of the industry seems fine with dynamic typing in languages, it's always been weird to me how diehard people seem to be about this with databases. There have been plenty of legitimate reasons to be skeptical of mongodb over the years (especially in the early days), but this one really isn't any more of a big deal than using Python or JavaScript.
As someone who has done a lot of Ruby coding, I would say using a statically typed database is almost a must when using a dynamically typed language. The database enforces the data model, and the Ruby code was mostly just glue on top of that data model.
Yes, there's a schema, but it's hard to maintain. You end up with 200 separate code locations re-checking that the data is in the expected shape. I've had to fix too many such messes at work after a project ground to a halt. Ironically, some people will go schemaless but use a statically typed language for the regular backend code, which doesn't buy you much; I'd totally do dynamic there. But a DB schema is so little effort for the strong foundation it sets for your code.
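A rough analogy in a typed language (hypothetical `User` type, not from any real codebase): the schema plays the role of a single validating boundary, so downstream code doesn't have to re-check shape at every call site.

```rust
// Hypothetical sketch: parse once at the boundary instead of
// re-checking the data's shape in 200 places.
struct User {
    name: String,
    age: u32,
}

// The one place that enforces the "schema"; everything downstream
// can rely on User being well-formed.
fn parse_user(name: &str, age: i64) -> Result<User, String> {
    if name.is_empty() {
        return Err("name must be non-empty".into());
    }
    let age = u32::try_from(age).map_err(|_| "age out of range".to_string())?;
    Ok(User { name: name.to_string(), age })
}

fn main() {
    assert!(parse_user("", 30).is_err());
    let u = parse_user("Ada", 36).unwrap();
    println!("{} is {}", u.name, u.age);
}
```

With the schemaless-plus-static-types combination the parent describes, that `parse_user`-style check ends up duplicated wherever the data enters the code.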
Sometimes it comes from a misconception that your schema should never have to change as features are added, and so you need to cover all cases with 1-2 omni tables. Often named "node" and "edge."
The adage I always tell people is that in any successful system, the data will far outlive the code. People throw away front ends and middle layers all the time. This becomes so much harder to do if the schema is defined across a sprawling middle layer like you describe.
We just sit a data persistence service in front of mongo, so we can enforce some controls there for everything if we need them, but quite often we don't.
It’s probably better to check what you’re working on than blindly assuming this thing you’ve gotten from somewhere is the right shape anyway.
I think the argument they're making is that once you have this, you already have an easy way to test things that doesn't require bringing in an entire framework.
The point of Rust is ostensibly to provide a safer version of C++-like semantics, not necessarily to avoid the same level of complexity. Especially if you're directly using unsafe code (which is necessary in some cases, like FFI), it's not really clear to me that Rust was "meant" to be doing something wildly different here. The large majority of the code not needing to use unsafe will still be better off even if this type of thing is necessary in some places.
(To preempt any potential explanations about this: yes, I understand the reference being made with that quote. I just don't think it actually applies here at all)
This is all well and good, but "zero cost abstractions" implies, well, that you're crawling _up_ the abstraction pyramid, not lost in twisty little side passages.
I've never understood that to mean that every possible abstraction would be zero cost, but that the language itself provides abstractions that are zero cost. I'm not sure how you could avoid having external libraries ever implement an abstraction with a cost. I honestly can't tell from this article alone what library the type `FFISafe` is from, but it's not from std.
(As an aside, someone who's spent a lot of time in the ecosystem might be able to infer this is likely a third-party library just from the fact that the capitalization is not in the typical format that official things use; if it were in std, I'd expect it to be FfiSafe, similar to how IoSlice[1], Arc[2], Shl[3], TypeId[4], etc. don't treat separate parts of an abbreviation/acronym/initialism as word breaks.)
> I've never understood that to mean that every possible abstraction would be zero cost, but that the language itself provides abstractions that are zero cost.
To me, "zero cost abstractions" means that you should use the language, because even the fancy abstractions it provides are free. Which presupposes that the language designers _believe_ that abstractions _and_ performance are both _important_.
Judging by all the puff pieces on the language, I don't think I'm alone in this belief of what the average Rustacean holds to be true.
> I'm not sure how you could avoid having external libraries ever implement an abstraction with a cost.
But the point is that the exact same external library code has a _lower_ cost if called from the non-abstract horror of a language like C, and the (partial) fix for that is really ugly-looking low-level Rust crap that took a _long_ time to figure out and is the exact opposite of an abstraction, and the full fix is not even known yet.
Yes, we all know that abstractions sometimes obscure what is really going on, but the tradeoff is that the code is shorter and prettier, and easier to understand at a higher level. That's... not what is going on here.
> But the point is that the exact same external library code has a _lower_ cost if called from the non-abstract horror of a language like C, and the (partial) fix for that is really ugly-looking low-level Rust crap that took a _long_ time to figure out and is the exact opposite of an abstraction, and the full fix is not even known yet.
No, it's not the exact same external library. There's an additional Rust library in between that they used, which provides the `FFISafe` type, and that has overhead. This is not a standard Rust library, or even one that I'm able to find within a few minutes of googling despite having used Rust for over a decade. It's not clear to me why you think this is necessarily representative of Rust rather than one specific library; someone could just as easily wrap a C library in another C library in a way that adds overhead, and it would be equally nonsensical to cite that as an argument that C isn't efficient.
Your argument seems to boil down to "Rust claims to be efficient, but it's possible for someone to write inefficient code, and for me to use that code, so therefore those claims are wrong".
> There's an additional Rust library in between that they used, which provides the `FFISafe` type, and that has overhead.
Look, I wrote " But the point is that the exact same external library code has a _lower_ cost if called [from C]" and that remains a true statement. It's pretty obvious that I was referring to the shared code, otherwise, it wouldn't have ever been called from C, right?
The profiler showed that the identical assembly language itself was taking more cycles when called from Rust than when called from C.
> There's an additional Rust library in between that they used, which provides the `FFISafe` type, and that has overhead. This is not a standard Rust library, or even one that I'm able to find within a few minutes of googling despite having used Rust for over a decade.
The point is that they did the absolutely normal, expected thing in Rust, and it slowed down the external assembly language library, and after a _lot_ of digging and debugging, they changed the Rust code to be a lot less flexible and more convoluted, and now the assembly language is almost as fast as when it is called from C.
> Your argument seems to boil down to "Rust claims to be efficient, but it's possible for someone to write inefficient code, and for me to use that code, so therefore those claims are wrong".
No, that's not my argument at all. Look, the people doing this _obviously_ know Rust, and it took them a _long_ time, and some _really_ ugly, concrete low-level code, to take external code and make it perform almost as well as if it had been called from C++.
To me, that looks like a high-cost non-abstraction.
Sure, but languages and the problems we solve with them are both multifaceted, so simply pointing to one tool and saying "this is better than the one you have in your toolbox" is fine, but the plural in "zero cost abstractions" kind of implies that most or all the tools are at parity or better.
It sounds like you're saying that you consider seeing this single instance of someone writing a library with a costly abstraction to be indicative of the entire language ecosystem not fitting the paradigm. This is kind of hard to take seriously; it's not like C++ doesn't have some costly abstractions as well way more embedded into the language itself (e.g. exceptions).
> someone writing a library with a costly abstraction
That's not what happened here.
> it's not like C++ doesn't have some costly abstractions
This is simultaneously both completely orthogonal to my observation that the Rust FFI is borked, and a great example of a problem that wouldn't happen in C++, because in C++ you could completely ignore the costly abstractions if necessary.
> > someone writing a library with a costly abstraction
> That's not what happened here.
Yes, it is. Where do you think the `FFISafe` type that they used came from? It's not anything inherent to how Rust does FFI; it's a type someone wrote in an attempt to provide an abstraction, and that abstraction happened to have a cost. There's absolutely no reason anyone has to use it in order to do FFI in Rust.
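For reference, plain Rust FFI needs no wrapper library at all; a minimal sketch calling libc's `abs` through a bare `extern "C"` block (nothing here is specific to the article's code):

```rust
// Declare the C function from libc; no helper types or wrapper
// crates involved.
extern "C" {
    fn abs(x: i32) -> i32;
}

fn main() {
    // Calling across the FFI boundary requires `unsafe`, but it
    // compiles down to an ordinary function call.
    let v = unsafe { abs(-5) };
    println!("{v}"); // 5
}
```

Anything like `FFISafe` sits on top of this, which is why its overhead is attributable to that library rather than to the language's FFI mechanism.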
Not related to the topic, but seeing this tidbit in the article took me by surprise:
> PSA: if you don’t see syntax highlighting, disable the 1Password extension.
This linked to the following discussion, which has been ongoing for over a week (with acknowledgement from 1Password that they're looking into it, but as far as I can tell no ETA on the fix or explanation for why it's happening in the first place):
I know that browser extensions are in general a pretty terrible ecosystem in terms of security, but am I off-base for thinking this is not a great look for 1Password? Maybe I'm just underestimating how hard this is to solve generally, but my expectation would be that it shouldn't be that hard to keep track of which circumstances you're loading entire external scripts to manipulate the DOM in, and not do it accidentally in places you have no need for it.
Your assumption that rich people spend less for fun than poor people can afford to spend to survive is not something I'm confident enough in to trust as the basis for policy like this.
Given how many people have been deported this year in violation of judicial orders, and how the secretary of the DHS earlier this year even testified to Congress that they thought "habeas corpus" was a right the president has to unilaterally deport people, I don't think it's unreasonable for people to be trying to get out in front of any potential unconstitutional deportations by arguing against them. If the fears turn out to be unfounded, that's a good thing.
In the article, there's a link in the text right before they put that parenthetical. I'm guessing they're saying that the link interested them so they clicked it to read but opened it in a new tab so they could finish the current article first.
100%. Types don’t replace fuzzing, property tests, chaos, or adversarial thinking. They just move one slice of bugs from runtime to compile time and make refactors safer.
In hindsight I should have positioned types/ADTs as one layer in the reliability toolbox, not the toolbox.
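To make that "one layer" framing concrete, here's a hypothetical ADT sketch (the `Fetch` type is made up for illustration): the compiler rules out impossible states like "loaded but with no data", while bugs in the logic itself still need tests or fuzzing to find.

```rust
// An ADT makes invalid states unrepresentable: there is no way to
// construct a "loaded" value without a body, or a "failed" one
// without a code.
enum Fetch {
    Pending,
    Loaded(String),
    Failed { code: u16 },
}

fn describe(f: &Fetch) -> String {
    // The compiler forces every variant to be handled here.
    match f {
        Fetch::Pending => "waiting".to_string(),
        Fetch::Loaded(body) => format!("got {} bytes", body.len()),
        Fetch::Failed { code } => format!("error {code}"),
    }
}

fn main() {
    println!("{}", describe(&Fetch::Loaded("hello".into())));
}
```

Note what this layer doesn't catch: if `describe` computed the byte count wrong, the program would still compile, which is exactly where the other tools in the toolbox come in.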
I've seen it pointed out that the main point of functional programming is immutability, and that the benefits mostly flow from that. I haven't really learned much of any lisp dialect, but my (admittedly fuzzy) general perception is that this is also the preferred way to work in them, so my guess is that's where the benefit in reliability might come from.
Correct. If things are mutable, then in most languages there can be spooky action at a distance: something mutates a field of some other object, either directly or indirectly through a chain of calls, which then changes how that thing behaves in other circumstances. This style of programming quickly becomes hard to fully grasp and leads to humans making many mistakes. Avoiding mutation therefore avoids these kinds of faults.