
Well that's just like, your opinion, man.

In all seriousness, C and C++ are with us forever. Improving these languages, and improving their safety with static and runtime analysis (and hardware support), has to be part of the plan moving forward.



To me, that's a very bleak outlook, and as I already mentioned, retrofitting C++ will never solve all of these issues; retrofits are always additive, and simply increase the complexity of the compiler and specification to the point that very few people actually understand the majority of C++'s semantics.

Retrofitting also doesn't address the philosophy of a language. There are simply aspects of C++ that are too vital to its identity to change. Is object-oriented programming the final word on abstraction in programming? Probably not. This is a young field, and there are always better solutions lurking around the corner. When we buy into the notion that something should live forever, we rob ourselves of the opportunity to move forward, or at least to know with certainty whether something is truly the best.

Unless we want to repeat what happened with COBOL, where the systems have lasted so long that all of the COBOL programmers are dead or retired, we need to start evolving our philosophy to favor language replace-ability, or stop guaranteeing backwards compatibility. The latter is untenable for most businesses, while the former can be achieved through small services, FFIs, RPC, and system modularity.


C++ does remove deprecated features, but so far there has been no reason to deprecate OOP. The fundamentalist viewpoint you're espousing is not particularly convincing; in particular, the amount of labor required to kill and replace working systems for aesthetic reasons (as opposed to evolving them) would remove much of the free time required to create new and innovative technology.


Which is why I also said that we need to evolve our thinking towards replaceable systems. As systems grow, the pace of innovation slows as it becomes cumbersome to make changes, and the barrier to entry into the code base rises. With that in mind, we are well served to replace programming languages as a means of maintaining constant velocity in our development.

I don't claim that existing systems are easy to replace, but I do posit that they can be made replaceable, provided that certain practices are adopted and the developers' mindset is that the system should be easy to remove and replace.


What languages in particular are you suggesting we move towards? Rust is getting there, but it's still evolving its feature set rapidly. That's about it. Any HN favorite, such as Go, is not a valid replacement.


I'm personally behind Rust and Go, but I don't want that conflated with the point I'm trying to make about facilitating code replace-ability. We shouldn't move towards any language in particular, but instead facilitate interoperability between languages, focus on building services instead of libraries, and size services such that they can be easily replaced. A business doesn't need a different language per developer, but it should expect to phase out languages periodically, accept more rewrites, and give more consideration to the right tool for the job when greenfielding, rather than jumping to some mandated language like Java.

I also wouldn't say that Go is invalid; it's more than production ready, and it's actually in production in critical infrastructure today at scale.


Go is not valid mostly for performance reasons. I work in embedded, and aside from the client apps, C/C++ dominate in the performance space. I think Go can replace, and has replaced, C++ in other areas, but it still has a long way to go. Its C interoperability was extremely limited last I used it, and that's a big lever for getting people to move.


Even if every single C++ programmer stopped using virtual functions today (which is never going to happen), the OOP subset of C++ is not going to be removed.


My stance is that "the majority of C++'s semantics" can also evolve with time, and one of C++'s core design philosophies is to keep the core language as minimal as possible so that the language evolves through how people use it rather than through changes to the core language specification. Examples include robust implementations of the Entity Component System pattern, user-defined function objects, and expression templates, all developed prior to C++11.
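
To make that concrete, here's a minimal sketch of the expression-template idea, with invented names (Vec, AddExpr): operator+ builds a lazy expression tree, and assignment evaluates the whole tree in one fused loop with no temporaries, all as a pure library technique with no core-language change.

    // Minimal expression-template sketch: '+' builds a lazy expression tree,
    // and assignment to a Vec evaluates it in a single fused loop, with no
    // temporary Vec allocated per '+'. Names are invented for the example.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    template <typename L, typename R>
    struct AddExpr {
        const L& lhs;
        const R& rhs;
        double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
        std::size_t size() const { return lhs.size(); }
    };

    struct Vec {
        std::vector<double> data;
        explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
        double operator[](std::size_t i) const { return data[i]; }
        double& operator[](std::size_t i) { return data[i]; }
        std::size_t size() const { return data.size(); }

        // Assigning from an expression walks the tree element by element.
        template <typename Expr>
        Vec& operator=(const Expr& e) {
            for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
            return *this;
        }
    };

    // A real library would constrain this operator; it's left greedy here
    // for brevity.
    template <typename L, typename R>
    AddExpr<L, R> operator+(const L& l, const R& r) { return {l, r}; }

    int main() {
        Vec a(4, 1.0), b(4, 2.0), c(4, 3.0), out(4);
        out = a + b + c;              // one pass over the data
        std::cout << out[0] << "\n";  // prints 6
    }

Everything above is plain pre-C++11 machinery; it's the same approach libraries like Blitz++ and Eigen built on long before the standard changed.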

Moreover, the COBOL story just reminds me that there are still many cases where code has to be maintained forever, and there is no unicorn that solves every problem like magic. RPC introduces performance and mental overhead. FFI is limited by performance and vague boundaries. Even in backend development, plenty of companies are switching back to monolithic applications after trying microservices.


I don't think we should force C++ to be something it's not (which has been a monumental task), and C++ will never shed its shackles of C compatibility. Moreover, languages are also defined by what they leave out, which is very different from the current mindset of fixing C++ by adding "missing" features. Languages like Rust or Go, designed from the ground up for a focused purpose, look and feel starkly different from a garden like C++.

It's also unfair to dismiss RPC or FFI, which have come a long way and in many cases add no noticeable overhead. Today's networks are approaching 400G speeds, and Linux has added more interfaces, such as eBPF bytecode, that reward a cross-language mindset with better performance.


I totally agree with your points in the first paragraph, but sometimes the boundary between suitable and unsuitable tasks is quite vague, and cross-language development introduces new complexity. I mean, if something is 2x better in another language, I will simply use it, but if it's only 1.2x then I will stick with the current solution.

The problem with RPC is latency and increased complexity, not throughput. It's still hard to achieve native performance when passing objects across an FFI boundary without a copy, especially when using stub generators.


Choosing a language is a difficult task, I agree. It is often qualitative improvements that are used to argue for new languages, and most engineers are skeptical of those, as they tend to favor quantitative needle-moving. So it's difficult to determine whether something really is 2x.

I choose languages through a series of criteria. First, I filter based on my principles, which limits me to languages where safety, efficiency, and expressiveness are highly valued; most languages list some core principles on their website. Then I take my software's requirements and find the best fit among that subset. If I am on a team, we have to come to a consensus on the team's shared principles, which I find is valuable well beyond choosing a language.

I think services can be sized large enough to encapsulate most critical code paths while remaining replaceable, so that sharing memory should be a rare requirement. Facebook and Google are both examples of companies operating at hyperscale on an RPC-based service architecture. For those times where it is a requirement, I can't argue that RPC is a good choice. C is a great lingua franca, and most languages support its ABI (at least the ones used for performance-critical work).
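
To make the lingua franca point concrete, here's a minimal sketch of a zero-copy C-ABI boundary from the C++ side (the function name sum_f64 and the local driver are invented for the example): the caller in Rust, Go, Python, or whatever else passes a pointer and a length, and the data is read in place rather than marshalled.

    // Minimal sketch of a zero-copy C-ABI boundary. In practice this
    // translation unit would be built as a shared library and called over
    // FFI; main() is just a local driver so the sketch runs on its own.
    #include <cstddef>
    #include <iostream>

    // extern "C" disables name mangling, so any language that speaks the
    // C ABI can bind to this symbol directly.
    extern "C" double sum_f64(const double* data, std::size_t len) {
        double total = 0.0;
        for (std::size_t i = 0; i < len; ++i) total += data[i];  // reads the caller's buffer in place
        return total;
    }

    int main() {
        const double xs[] = {1.0, 2.0, 3.0};
        std::cout << sum_f64(xs, 3) << "\n";  // prints 6
    }

The trade-off, of course, is that ownership and lifetimes have to be agreed on out of band, which is exactly the kind of vague boundary mentioned upthread.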


Then we are basically on the same side; it's just our principles that vary. Security is ensured with external tools (e.g., SQL sanitizers/checkers and minimum privilege), and AWS's 4TB instances exist for a reason: graph data is hard to scale. Sometimes even passing data over shared memory is too slow for us, so we rolled out copy-free allocators that allocate objects directly in that memory region.
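
If I'm reading the copy-free allocator idea right, it's in the spirit of constructing objects directly inside a pre-mapped region with placement new. Here's a minimal sketch with invented names (Order, RegionAllocator) and a plain static buffer standing in for the real shared-memory mapping:

    // Minimal sketch of constructing objects directly in a pre-existing
    // memory region, so they never need to be copied into it. A real
    // version would map the region with shm_open/mmap and handle
    // synchronization, destruction, and cleanup.
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <new>      // placement new
    #include <utility>  // std::forward

    struct Order {
        std::uint64_t id;
        double price;
    };

    class RegionAllocator {
    public:
        RegionAllocator(void* base, std::size_t size)
            : base_(static_cast<std::byte*>(base)), size_(size), used_(0) {}

        // Bump-allocate space inside the region and construct T in place.
        template <typename T, typename... Args>
        T* create(Args&&... args) {
            std::size_t offset = align_up(used_, alignof(T));
            if (offset + sizeof(T) > size_) return nullptr;  // region exhausted
            used_ = offset + sizeof(T);
            return new (base_ + offset) T{std::forward<Args>(args)...};
        }

    private:
        static std::size_t align_up(std::size_t n, std::size_t a) {
            return (n + a - 1) / a * a;
        }
        std::byte* base_;
        std::size_t size_;
        std::size_t used_;
    };

    int main() {
        alignas(64) static std::byte region[4096];  // stand-in for an mmap'd region
        RegionAllocator alloc(region, sizeof(region));
        Order* o = alloc.create<Order>(42u, 101.5);  // built in place, no copy
        std::cout << o->id << " @ " << o->price << "\n";
    }

Obviously that elides the hard parts (cross-process layout, lifetime, versioning), but it shows why the copy can be skipped entirely.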



