OK, so those are skeptics about tooling, not about memory safety per se.
And not even about tooling per se, since "achieving safety on their own" doesn't literally mean on their own; they are relying on tooling.
A true Scotsman's "on your own" means working in assembly language, where you have to carefully manage even just calling a function and returning from it: not leaving stray arguments on the stack, saving and restoring all callee-saved registers that are used in the function, and so on.
Someone who thinks that their job is not to have segfaults is pretty green, obviously.
These C++ developers I'm mentioning are all at least senior (some of them staff+), which makes their remarks on segfaults scary, because clearly they haven't realized that a segfault is the best-case scenario, not the worst one. They very much need a refresher course on memory safety and why it matters.
The fact that they assume they're good enough to avoid memory errors without tooling, even though most of these errors are invisible and may remain so for years before being discovered, is a second red flag, because it strongly suggests that they misunderstand the difficulty.
Of course, the conversation between you and me is complicated by the fact that the same phrase "memory safety" could apply to the programming language/tooling or to the compiled binary.
The whole discussion sucks, because Rust is not memory safe, and can easily be worse than C and C++ regarding memory safety. Memory unsafety is entirely possible in Rust[0].
> A true Scotsman's "on your own" means working in assembly language, where you have to carefully manage even just calling a function and returning from it: not leaving stray arguments on the stack, saving and restoring all callee-saved registers that are used in the function, and so on.
I've actually been trying to figure out if it's practical to write assembly but then use a proof assistant to guarantee that it maintains safety. Weirdly, it feels easier than C insofar as you don't have (C's) undefined behavior to contend with. But formal methods are a whole thing and I'm very, very new to them, so it remains to be seen whether this is actually a good idea, or awful and stupid :) (Edit: ...or merely impractical, for that matter)
You should take a look at TAL (the Typed Assembly Language) and DTAL (the Dependently Typed Assembly Language). The latter was designed specifically to support co-writing assembly and formal proofs.
It would be impractical. It would be monumental to even specify what 'safe' means with that degree of freedom, let alone accept/reject arbitrary assembly code according to the spec.
E.g., trying to reject string operations which write beyond the trailing \0. At the assembly level, \0 is only one of many possible conventions for bounding a string. E.g., maybe free() is allowed to write past the \0. So you'd need to decide whether it's safe depending on context.
Well yeah, I assumed that if I was doing formal verification, it would necessarily involve fully specifying/tracking data sizes, which would mean that all strings would be Pascal strings (at least internally; obviously it might be necessary to tack on a \0 for compatibility with external code).
You could do that, and some very expensive aerospace tooling exists in that direction (for verifying compiler outputs), but an easier way is to use sanitizers on a best-effort basis. You can either use something like AFL++ with QAsan, or use retrowrite to insert whatever you want into the binary as if it were a normal compiled program.