> the C-style approach is to extract a minimal subset of the dependency and tightly review it and integrate it into your code. The Rust/Python approach is to use cargo/pip and treat the dependency as a black box outside your project.
The Rust approach is to split off a minimal subset of functionality from your project into an independent sub-crate, which can then be depended on and audited independently of the larger project. You don't need to pull in all of ripgrep[1] to get access to its engine[2] (which is further disentangled for more granular use).
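As a rough sketch of what that looks like in practice (the module paths and builder calls below are from memory of the grep facade crate, so treat them as assumptions and check the crate docs): you depend on the engine crates only, never the ripgrep binary.

```rust
use grep::regex::RegexMatcher;
use grep::searcher::sinks::UTF8;
use grep::searcher::Searcher;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The search engine, pulled in without the rest of the ripgrep tool.
    let matcher = RegexMatcher::new(r"engine")?;
    let haystack = b"the ripgrep binary\nthe grep engine crates\n";

    let mut hits = Vec::new();
    Searcher::new().search_slice(
        &matcher,
        haystack,
        UTF8(|line_number, line| {
            hits.push((line_number, line.to_string()));
            Ok(true) // keep searching after each match
        }),
    )?;

    println!("{:?}", hits);
    Ok(())
}
```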
Beyond the specifics of how you acquire the code you depend on and keep it up to date (including checking for CVEs), the work to review your dependencies is roughly the same and scales with the size of the code. Many smaller dependencies vs. one large dependency makes no difference if the aggregate of the former is roughly the size of the monolith. And if you're splitting code off from a monolith, you run the risk of using it in a way it was never designed for (for example, maybe it relies on invariants maintained by other parts of the library).
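A made-up illustration of that last risk (the helper and data here are purely hypothetical): code extracted from a larger library can silently depend on an invariant the rest of that library used to uphold.

```rust
// Hypothetical helper copied out of a larger library. It looks
// self-contained, but it only works if `entries` is kept sorted,
// an invariant that other code in the original project maintained.
fn find(entries: &[u32], needle: u32) -> Option<usize> {
    entries.binary_search(&needle).ok()
}

fn main() {
    // Outside the original library nothing enforces the sort order.
    // On an unsorted slice the result is unspecified: present elements
    // can be reported as missing.
    let entries = vec![30, 10, 20];
    println!("{:?}", find(&entries, 30));
}
```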
In my opinion, many smaller dependencies managed by a system that tracks the specific version of the code you depend on, with structured data that lets you run checks on all your dependencies at once in an automated way, is a much better engineering practice than "copy some code from some project". Vendoring is anathema to proper security practices (unless you have other mechanisms to deal with the vendored code, at which point you have a package manager by another name).
[1]: https://crates.io/crates/ripgrep
[2]: https://crates.io/crates/grep/
If you're not writing very low level stuff and don't need to squeeze out every last bit of performance, Rust code can be very simple and easy to understand as well.
Instead of disabling the borrow checker, what should be possible is to promote borrows to Rc/Arc as needed. I would want to restrict this mode to local development only, never publishable to crates.io. It would be particularly useful when running tests: instead of a compile error you'd get a runtime error with better information about the cases the borrow checker was actually encountering.
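A minimal sketch of what that promotion could look like done by hand today (using Rc plus RefCell for the runtime check; Arc with a lock would be the thread-safe equivalent): the overlapping borrow surfaces as a runtime panic instead of a compile error.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Wrapping the value in Rc<RefCell<_>> moves the aliasing check
    // from compile time to runtime.
    let value = Rc::new(RefCell::new(vec![1, 2, 3]));

    let reader = Rc::clone(&value);
    let first = reader.borrow(); // shared borrow, tracked at runtime

    // With plain `&`/`&mut` this would be a compile error; here it
    // panics with a BorrowMutError at this exact point instead.
    let mut writer = value.borrow_mut();
    writer.push(4);

    println!("{}", first[0]);
}
```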
It's a similar thing to watching movies from before the mid-2000s (I place the inflection point around Collateral in 2004): after that you get overly dark scenes where you can't make out anything, while in anything earlier the night scenes let you clearly make out the setting, and the actors/props in focus are clearly visible.
Watch An American Werewolf in London, Strange Days, True Lies, Blade Runner, or any other movie from the film era, right up to the start of digital, and you can see that the sets are incredibly well lit. On film they couldn't afford to reshoot and couldn't immediately see how everything in the frame had turned out, so they had to be conservative. They didn't have per-pixel brightness manipulation (dodging and burning were film techniques that could technically have been applied per frame, but good luck doing that at any reasonable expense or in any reasonable amount of time). They didn't have hyper-fast color film stock they could use (ISO 800 was about the fastest you could get), and it was a clear downgrade from anything slower.
The advent of digital film-making, once sensors reached ISO 1600/3200 with reasonable image quality, is when the allure of the time/cost savings of not lighting every scene heavily reared its ugly head, and by the 2020s you get the "Netflix look" from studios optimizing for "the cheapest possible thing we can get out the door" (the most expensive thing in any production is filming on location; a producer will want to trim every minute of that away, with the smallest crew they can get away with).
Yes, that’s the correct decision when those are the only options, like if a car has stalled or the driver just got out and ran away.
In this case there’s a third option: the computer that’s still perfectly functional should have been able to get out of the way on its own. And legally all drivers are required to.
I assume that applies to robots as well, if it doesn’t it absolutely should.
> Any large Rust project I check has tons of unsafe in its dependency tree.
This amounts to an argument against encapsulation. All Rust code eventually executes `unsafe` code, because all Rust code eventually interacts with hardware, the OS, or C libraries; the same is true of every language. Encapsulating that unsafety behind safe interfaces is part of the point of Rust.
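A toy illustration of that encapsulation (the function and names here are mine, not from any particular crate): the `unsafe` block lives behind a safe function that upholds the invariant the unsafe operation needs, so callers never touch `unsafe` at all.

```rust
/// Safe wrapper around an unsafe operation: callers never write
/// `unsafe`, because this function guarantees the invariant itself.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: the slice is non-empty, so index 0 is in bounds.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```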
Having a 3D engine does not a AAA game make. The Witness is a beautiful-looking game, but the amount of state and interactions it has to deal with is orders of magnitude less than GTA: San Andreas. It is closer to the complexity a Myst remake would have.
This reads to me like an argument for better refactoring tools, not necessarily for looser type systems. Those tools could range from mass-editing tools, to IDEs that update signatures in definitions when you change the callers (and vice versa), to compiler modes where the language rules are relaxed.
I was thinking about C++: if you change your mind about whether some member function or parameter should be const, it can be quite the pain to refactor manually, and good refactoring tools can make this go away. Maybe they already have; I haven’t programmed C++ for several years.
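The analogous situation in Rust (illustrative names only) is changing a method's receiver from `&self` to `&mut self`: the new requirement ripples through every caller's signature, which is exactly the kind of mechanical edit a refactoring tool could do for you.

```rust
struct Counter {
    hits: u64,
}

impl Counter {
    // Originally this took `&self`; once it needs to mutate state,
    // the receiver has to become `&mut self`...
    fn record(&mut self) {
        self.hits += 1;
    }
}

// ...and every function that calls it now needs `&mut Counter`
// instead of `&Counter`, and so on up the call chain.
fn handle_request(counter: &mut Counter) {
    counter.record();
}

fn main() {
    let mut counter = Counter { hits: 0 };
    handle_request(&mut counter);
    println!("requests handled: {}", counter.hits);
}
```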
If you want something more conservative for error reporting, annotate-snippets is finally at parity with rustc's current custom renderer and will soon become the default for both rustc and cargo.