The amazing part about examples like that is people read them, check that the compiler really does work on that basis, and then continue writing things in C++ anyway. Wild.
Suppose I should expand on this. The idea seems to be either 1/ disbelief - compilers wouldn't really do this - or 2/ infallibility - my code contains no UB.
Neither of those positions bears up well under reality. Programming C++ is working with an adversary that will make your code faster wherever it can, regardless of whether you like the resulting behaviour of the binary.
I suspect Rust has inherited this perspective in the compiler and guards against it with more aggressive semantic checks in the front end.
>The amazing part about examples like that is people read them, check that the compiler really does work on that basis, and then continue writing things in C++ anyway. Wild.
Well, in modern C++ this code would look like this:
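(A sketch of what that might look like, assuming the example in question is the usual off-by-one table lookup the optimizer collapses; the names here are only illustrative:)

    #include <algorithm>
    #include <array>

    std::array<int, 4> table{};   // the container knows its own size

    bool exists_in_table(int v) {
        // iteration is bounded by the container itself, so there is no
        // out-of-bounds read for the optimizer to exploit
        return std::find(table.begin(), table.end(), v) != table.end();
    }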
I actually would prefer to get the second output. The result is wrong, but consistently and deterministically so. The naive implementation of the broken code is a heisenbug: sometimes it will work, and sometimes it won't, and any attempt to debug it would likely perturb the system enough to make the issue not surface.
It wouldn't surprise me if I have run into the latter situation without realizing it. When I got to the problem, I would have just (incorrectly) assumed that the memory right after the array happened to have the relevant value. I would be counting my blessings that it happened consistently enough to be debuggable.
I agree that it is better to get deterministic and predictable behavior.
Reminds me of when, for a while, I worked on HP 9000s under HP-UX and in parallel on an Intel 80486-based Linux box. What I noticed is that the Unix workstations crashed sooner and more predictably with segmentation faults than Linux on the PC (not sure if this has changed since the early 1990s; it probably had to do with the MMU), so developing on HP under Unix and then finally compiling under Linux led to better code quality.
> The amazing part about examples like that is people read them, check that the compiler really does work on that basis, and then continue writing things in C++ anyway.
That isn't idiomatic C++ and hasn't been for a long time. Sure, it's possible to do it retro C-style, because backward compatibility, but you generally don't see that in a modern code base.
The modern codebase has grown from a legacy one. The legacy one has parts that started as C, then got partially turned into object-oriented C++, then partially turned into template abstractions. Those are the parts least likely to have comprehensive test coverage, and that is indeed where a compiler upgrade is most likely to change the behaviour of your application.
Remind me, how is this a good thing again? Especially considering that (if you write modern C++) the compiler should optimize away bound checks most of the time (and in all critical places) either way.
> the compiler should optimize away bound checks most of the time (and in all critical places)
Unfortunately, this happens much more rarely than you might think. In order to optimize a bounds check away, the compiler has to prove that it always passes, and surprisingly often it can't, to say the least. When it can't optimize the check away, bounds checking will slow down the code significantly, especially in tight loops. And that slowdown will be very hard to track down unless you know exactly where to look - you'll basically assume that this is how fast the code runs and that it can't be improved (a.k.a. "buy better hardware!").
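As a sketch of the difference, assuming a plain std::vector and checked access via at():

    #include <cstddef>
    #include <vector>

    // The loop condition already guarantees i < v.size(), so the compiler
    // can often prove the check inside at() redundant and drop it.
    long sum(const std::vector<long>& v) {
        long s = 0;
        for (std::size_t i = 0; i < v.size(); ++i)
            s += v.at(i);
        return s;
    }

    // Here nothing ties idx to v.size(), so the check (and the potential
    // throw) generally stays in the generated code.
    long pick(const std::vector<long>& v, std::size_t idx) {
        return v.at(idx);
    }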
Second, with proper programming hygiene, bounds checking is just redundant in many cases. There are at least two ways to iterate directly over a vector that don't require it: ranged `for (auto& e : vec) {}` and iterators. There is also the `<algorithm>` library, with implementations of many useful container iteration functions that at most require you to specify a functor that does some operation on an element.
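For illustration, a minimal sketch of both iteration styles (the vector and its contents are just placeholders):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};

        // ranged-for: iteration is bounded by the container, no index involved
        for (auto& e : v)
            e *= 2;

        // <algorithm>: you only supply a functor applied to each element
        std::for_each(v.begin(), v.end(),
                      [](int e) { std::cout << e << '\n'; });
    }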
And third - if you think that you really must have bounds checking, it's about as trivial to implement as:
    #include <cstddef>
    #include <memory>
    #include <vector>

    template<typename T, typename A = std::allocator<T>>
    class vectorbc : public std::vector<T, A> {
    public:
        using std::vector<T, A>::vector;   // inherit all of vector's constructors

        // bounds-checked operator[]: delegate to at(), which throws
        // std::out_of_range on a bad index
        T& operator[](std::size_t idx) {
            return this->at(idx);
        }
    };
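Usage is then the same as with a plain vector, except that a bad index throws instead of being undefined behaviour (a quick sketch):

    vectorbc<int> v{1, 2, 3};
    int ok  = v[1];    // fine
    int bad = v[10];   // throws std::out_of_range rather than reading garbage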
It's just as "amazing" to read these takes from techno purists. You use software written in C++ daily, and it can be a pragmatic choice regardless of your sensibilities.
When any Costco sells a desktop ten thousand times faster than the one I started on, we can afford runtime sanity checks. We don’t have to keep living like this, with stacks that randomly explode.
I don't know what line of work you're in, but I use a desktop orders of magnitude faster than my first computer also, and image processing, compilation, rendering, and plenty of other tasks aren't suddenly thousands of times faster. Not to mention that memory safety is just one type of failure in a cornucopia of potential logical bugs. In addition, I like core dumps because the failure is immediate, obvious, and fatal. Finally, stacks don't "randomly explode." You can overflow a stack in other languages also; I really just don't see what you're getting at.
> Not to mention that memory safety is just one type of failure in a cornucopia of potential logical bugs.
You can die of multiple illnesses so there's no point in developing treatment for any particular one of them.
> I like core dumps because the failure is immediate, obvious, and fatal.
Core dumps provide a terrible debugging experience, as the failure's root cause is often disjoint from the dump itself. Not to mention that core dumps are but one outcome of memory errors, with other, funnier outcomes such as data corruption and exploitable vulnerabilities being just as likely.
Lastly, memory-safe languages throw an exception or panic on out-of-bounds access, which can be made as immediate and fatal as a core dump. And it is much more obvious to debug, since you can trust that the cause indeed starts at the point of failure.
I don’t mean a call stack, I mean “stack” in the LAMP sense—the kernel, drivers, shared libraries, datastores, applications, and display servers we try to depend on.
I dunno, my computers seem to keep running slower and slower despite being faster and faster. I blame programmers increasingly using languages with more and more guardrails, which are slower. I'd rather have a few core dumps and my fast computer back.
Definitely. There's loads of value delivered by software written in C++, including implementations of C++ itself and of other languages. The language design of speed over safety mostly imposes a cost in developer/debugging time and fear of upgrading the compiler toolchain. Occasionally it shows up in real-world disasters.
I think we've got the balance wrong, partly because some engineering considerations derive directly from separate compilation. ODR violations being "no diagnostic required" doesn't have to be a thing any more.
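For anyone unfamiliar with the jargon, a sketch of what "ODR violation, no diagnostic required" means in practice (file names are illustrative):

    // a.cpp
    inline int flag() { return 1; }

    // b.cpp
    inline int flag() { return 2; }   // same name, different body: an ODR violation

    // The program compiles and links without complaint; which definition you end
    // up with is anyone's guess, and no compiler or linker is required to tell you.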
Lots of things 'aren't Rust'. In fact almost everything isn't Rust. For now. That may change in due course, but right now I would guesstimate the amount of Rust code running on my daily drivers to be pretty close to zero percent. The bulk is C or C++.
Someone else posted statistics that show Firefox being 10% Rust, but I'm not sure it makes sense to include HTML and Python and JavaScript in the comparison. If you compare Rust against C/C++, it's 20%.
"I'd just like to interject for a moment. What you're refering to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX."
Isn't this a tad pedantic? You obviously understood what I was saying.
>and that you omit external libraries from the tally.
Mozilla vendors their dependencies. They're counted.
> check that the compiler really does work on that basis, and then continue writing things in C++ anyway. Wild.
My compiler (MSVC) doesn't do that[0]. Clang also doesn't do this[1]. It's wild to me that GCC does this optimization[2]. It's very subtle, but Raymond Chen and OP both say a compiler can create this optimization, not that it will.
Well, the argument brought up is that users want it this way, so it's existing practice that is implemented and should be standardized. So please complain and file bugs.
Also, use the compiler and language features that help, such as variably-modified types (instead of bare pointers), attributes, compiler flags, etc.