> and I end up having all these typedefs in my projects
I avoid doing this now. It's more trouble than it's worth and it changes your code from a standard dialect of C into a custom one. Plus my eyes are old and they don't enjoy separating short identifiers.
> typedef struct { ... } String
I avoid doing this. Just use `struct string { ... };'. It makes it clear what you're handling. C23 finally gave us "auto", you shouldn't fret over typedefing everything anymore. I also prefer a "strbuf" type with an index and capacity so I can safely read and write to it with a derived "strview" having pointer and length only which references into the buffer.
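Not the actual code from the comment, but a minimal sketch of that strbuf/strview split (all names and fields here are illustrative guesses at the shape described):

    #include <stddef.h>
    #include <string.h>

    struct strbuf {                  /* owns the storage and tracks how much is used */
        char  *data;
        size_t len;                  /* bytes written so far (the "index") */
        size_t cap;                  /* total capacity */
    };

    struct strview {                 /* non-owning (pointer, length) view into a strbuf */
        const char *data;
        size_t      len;
    };

    /* append raw bytes, refusing to overflow the buffer */
    static int strbuf_append(struct strbuf *b, const char *src, size_t n) {
        if (n > b->cap - b->len)
            return 0;                /* would not fit */
        memcpy(b->data + b->len, src, n);
        b->len += n;
        return 1;
    }

    /* borrow a read-only view of what has been written so far */
    static struct strview strbuf_view(const struct strbuf *b) {
        return (struct strview){ b->data, b->len };
    }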
> returning results
The general method of returning structures larger than two machine words is fairly inefficient. Plus you're cutting yourself off from another C23 gem which was [[nodiscard]]. If you want the 'ok' value checked then you can _really_ specify that. Put everything else behind a pointer passed in an argument. The sum type logic works just as well there.
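A sketch of that shape under C23, with illustrative names (the enum and signature are assumptions, not the commenter's code):

    #include <string.h>

    enum copy_err { COPY_OK, COPY_EMPTY, COPY_TOO_LONG };

    /* the status cannot be silently dropped; the payload goes out through a pointer */
    [[nodiscard]] enum copy_err copy_name(const char *src, char *dst, size_t cap) {
        if (!src || !*src)
            return COPY_EMPTY;
        size_t n = strlen(src);
        if (n >= cap)
            return COPY_TOO_LONG;
        memcpy(dst, src, n + 1);
        return COPY_OK;
    }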
> I tend to avoid the string.h functions most of the time, only employing the mem family when I want to, well, mess with memory.
So you use strlen() a lot and don't have to deal with multibyte characters anywhere in your code. It's not much of a strategy.
> So you use strlen() a lot and don't have to deal with multibyte characters anywhere in your code. It's not much of a strategy.
You don't need to support all the multibyte encodings (e.g. DBCS, UCS-2, UCS-4, UTF-16 or UTF-32) if you're able to normalise all input to UTF-8.
I think, when you are building a system, restricting all (human language) input to be UTF-8 is a fair and reasonable design decision, and then you can use strlen to your heart's content.
Am I missing something here? UTF8 has multibyte characters, they're just spread across multiple bytes.
When you strlen() a UTF8 string, you don't get the length of the string, but instead the size in bytes.
Same with indices. If you index at [1] in a string with a flag emoji, you don't get a valid UTF8 code point, but instead some part of the flag emoji. This applies to any UTF8 code point larger than 1 byte, and there are a lot of those.
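A tiny illustration of both points (the UTF-8 is hand-encoded so the bytes are explicit; illustrative code, not from the comment):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *s = "h\xc3\xa9" "llo";        /* "héllo": 5 code points, 6 bytes */
        printf("%zu\n", strlen(s));               /* prints 6 -- bytes, not code points */
        printf("%02x\n", (unsigned char)s[2]);    /* a9 -- indexing lands inside the é */
        return 0;
    }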
> When you strlen() a UTF8 string, you don't get the length of the string, but instead the size in bytes.
Exactly, and that's what you want/need anyway most of the time (most importantly when allocating space for the string or checking if it fits into a buffer).
If you want the number of "characters", that can have two meanings: either a single Unicode code point, or a grapheme cluster (a "visible character" composed from multiple Unicode code points). For this stuff you need a proper Unicode/grapheme-aware string processing library. But this is needed only rarely in most application types, which just pass strings around or occasionally need to split/parse/tokenize by 7-bit ASCII delimiters.
Turns out that I rarely need to know sizes or indices of a UTF8 string in anything other than bytes.
If I write a parser, for instance, what I usually want to know is "what is the sequence of bytes between this sequence of bytes and that sequence of bytes". That there are flag emojis or whatever in there doesn't matter, and the way UTF8 works ensures that one character's representation doesn't partially overlap with another's.
What the byte sequences mean only really matters if you are writing an editor, so that you know how many bytes to remove when you press backspace for instance.
Truncation to prevent buffer overflow seems to be a case where it would matter, but not really. An overflow is an error and should be treated as such. Truncation is a safety mechanism, for when having your string truncated is a lesser evil. At that point, having half a flag emoji doesn't really matter.
> When you strlen() a UTF8 string, you don't get the length of the string, but instead the size in bytes.
Yes, and?
> What am I missing?
A use-case? Where, in your C code, is it reasonable to get the number of multibyte characters instead of the number of bytes in the string?
What are you going to use "number of unicode codepoints" for?
Any usage that amounts to "I need the number of unicode codepoints in this string" is coupled to handling the display of glyphs within your program, in which case you'd be using a library for that, because graphics is not part of C (or C++) anyway.
If you're simply printing it out, storing it, comparing it, searching it, etc, how would having the number of unicode codepoints help? What would it get used for?
Indeed. If you have output considerations then the number of Unicode codepoints isn't what you wanted anyway; you care about how many output glyphs there will be. A codepoint might result in zero glyphs, it might modify an adjacent glyph, or it might be best rendered as multiple glyphs.
If you're doing some sort of searching you want a normalization and probably pre-processing step, but again you won't care about trying to count Unicode code points.
> For example splitting, cutting and inserting strings into each other
That's not going to work without a glyph-aware library anyway; even if you are working with actual codepoint arrays, you can't simply insert a codepoint into that array and have a correct unicode string as the result.
I do not agree that restricting it to UTF-8 (or to Unicode in general) is a fair and reasonable design decision (although UTF-8 may be reasonable if Unicode is somehow required anyway (you should avoid requiring Unicode if you can, though), especially if the program is also expected to deal with ASCII in addition to requiring Unicode). But regardless of that, the number of code points is not usually relevant (and substring operations indexed by code points are not usually necessary either); the number of bytes will be more important, and some programs should not need to know about the character encoding at all (or only have a limited consideration of what they do with it).
(One reason you might care about the number of code points is because you are converting UTF-8 to UTF-32 (or Shift-JIS to TRON-32 or whatever else) and you want to allocate the memory ahead of time. The number of characters (which is not the same as the number of code points in the case of Unicode, although for other character sets it might be) is probably not important; if you want to display it, you will care about the display width according to the font, and if you are doing editing then where one character starts and ends is going to be more significant than how many characters they are. If you are using and indexing by the number of code points a lot (even though as I say that should not usually be necessary), then you might use UTF-32 instead of UTF-8.)
(It is also my opinion that Unicode is not a good character set.)
> I think, when you are building a system, restricting all (human language) input to be UTF-8 is a fair and reasonable design decision, and then you can use strlen to your hearts content.
It makes no sense. If you only need the byte count then you can use strlen no matter what the encoding is. If you need any other kind of counting then you don't use strlen, no matter what the encoding is (except in an ASCII-only environment).
"Whether I should use strlen or not" is a completely independent question to "whether my input is all UTF-8."
> I avoid doing this. Just use `struct string { ... };'. It makes it clear what you're handling.
Well then imagine if Gtk made you write `struct GtkLabel`, etc. and you saw hundreds of `struct` on the screen taking up space in heavy UI code. Sometimes abstractions are worthwhile.
People getting hung up on `_t` usage being reserved for POSIX need to lighten up. I doubt they'll clash with my definitions and if it does happen in the future, I'll change the typedef name.
> Well then imagine if Gtk made you write `struct GtkLabel`, etc. and you saw hundreds of `struct` on the screen taking up space in heavy UI code. Sometimes abstractions are worthwhile.
TBH, in that case the GtkLabel (and, indeed, the entire widget hierarchy) should be opaque pointers anyway.
If you're not using a struct as an abstraction, then don't typedef it. If you are, then hide the damn fields.
Thank you! I wanted to point out exactly that. When I was a very junior programmer and coded alone, I used to have "that elemental header" with lots of things inside. Many of them to convert C into what I wished it was.
Now I think it's somewhere between not a good idea and absolutely awful.
Yes, sometimes you wish some things were different in a programming language: "if only these types had shorter names". But when you work in a team, first you need consensus, and then modifying the language becomes a heavy load that every new person in the project will have to lift.
"Modifying C is porting the Lisp curse to C" is my motto. Keep everything as standard, as vanilla as possible.
I had a coworker who had a very complicated set of "includes" that his code relied upon—not unlike the typedefs in the post. So his code was difficult to move around without also moving all his headers with it.
I try to minimize dependencies (custom headers, custom macros, etc.).
> I’ve long been employing the length+data string struct. If there was one thing I could go back in time to change about the C language, it would be the removal of the null-terminated string.
It's not necessary to go back in time. I proposed a way to do it in modern C - no existing code would break:
> the fatal error was not combining the array dimension with the array pointer; all it needs is a little new syntax a[...]; this won’t fix any existing code. Over time, the syntax a[] can be deprecated by convention and by compilers.
You're thinking in decades. The C standard committee is slower than that. This could have worked, but it will probably never happen in practice. Maybe people should start considering a language like D[1] as an alternative, which seems to have the spirit of both C and Go, but with much more pragmatism than either.
It might be the language he is looking for, but it might not, and more likely than not is not. D is one of those odd languages which most likely ought to have gotten a lot more popular than it did, but for one reason or another, never quite caught on. Perhaps one reason is because it lacks a sense of eccentricity and novelty that other languages in its weight class have. Or perhaps it's just too unfamiliar in all the wrong ways. Whatever the case may be, popularity is in fact one of the most useful metrics when ruling out a potential language for a new project. And if D does not meet GP's requirements in terms of longevity or commercial support, I would certainly not suggest GP adopt it too eagerly, simply because it happens to check off most or all their technological requirements.
Some of these are definitely nice-to-haves*, but when you're evaluating a C++ alternative, there are higher priority features to research first.
How are the build times? What does its package system(s) look like, and how populated are they? What are all its memory management options? How does it do error handling and what does that look like in real world code? Does it have any memory safety features, and what are their devtime/comptime/runtime costs? Does it let me participate in compile time optimizations or computations?
Don't get me wrong, we're on the same page about wanting to find a language that fills the C++ niche, even if it will never be as ideal as C++ in some areas (since C++ is significantly worse in other areas, so it's a fair trade off). But just like dating, I'm imagining the fights I'll have with the compiler 3 months into a full time project, not the benefits I'll get in the first 3 days.
* (a) I've been using structs without typedef without issue lately, which has its own benefits such as clarifying whether the type is simple or aggregate in param lists, while auto removes the noise in function bodies. (b) Not needing forward declarations is convenient, but afaik it can't not increase compile times at least somewhat. (c) I like the consistency here, but that's merely a principle; I don't see any practical benefit.
You can use exceptions or returns for error handling.
The biggest memory safety feature it has is length-delimited arrays. No more array overflows! The cost of it is the same as in std::vector when you do the bounds checked option. D also uses refs, relegating pointers to unusual uses. I don't know what you mean by "participating in optimizations".
(a) C doesn't have the hack that C++ has regarding the tag names. D has auto.
(b) D has much faster compile times than C++.
(c) The practical benefit is the language is much easier to master.
that doesn't make sense, because why would you make something opaque and expose it immediately again in the same line?
The others are ... different. I can't tell whether they are really better. The second maybe, although I like it that the compiler forces me to forward-declare stuff; it makes the code much more readable. But then again I don't really get the benefit of
import foo;
vs
#include <foo>.
include vs import makes no difference. # vs nothing makes it clear that it is a separate feature instead of just a language keyword. < vs " makes it clear whether you use your own stuff or stuff from the system. What do you do when your filename contains spaces? Does import foo bar; work for including a single file named "foo bar"?
It's inelegant because without the typedef, you need to prefix it always with `struct`. This is inelegant because all other types do not need a prefix. It also makes it clumsier to refactor the code (adding or subtracting the leading `struct`). The typedef workaround is extremely commonplace.
> I like it that the compiler forces me to forward type stuff, it makes the code much more readable
That means when opening a file, you see the first part of the file first. In C, that means you see a list of forward references. This isn't what you want to see - you want to see the public interface first, not the implementation details. (This is called "above the fold", coming from what you see in a folded stack of newspapers for sale. The headlines are not hidden below the fold or in the back pages.) In C, the effect of the forward reference problem is that people tend to organize the code backwards, with the private leaf functions first and the public functions last.
> include vs import is no difference
Oh, there is a looong list of kludgy problems stemming from a separate macro processor that is a completely distinct language from C. Even the expressions in a macro follow different rules than in C. If you've ever used a language with modules, you'll never want to go back to #include!
> What do you do when your file contains spaces?
A very good question! The module names must match the filename, and so D filenames must conform to D's idea of what an identifier is. It sounds like a limitation, but in practice, why would one want a module name different from its filename? I can't recall anyone having a problem with it. BTW, you can write:
import core.stdc.stdio;
and it will look up `core/stdc/stdio.d` (Linux, etc.) or `core\stdc\stdio.d` on Windows.
We obviously disagree about the code organization we prefer, as I find that rather elegant, but this doesn't sound like a substantial discussion. You as the language author are obviously quite content with the choices D made.
> This is inelegant because all other types do not need a prefix.
I don't find that. It rather makes it possible to clearly distinguish between transparent and opaque types. That these live in a separate namespace also makes it possible to use the same identifier for the type and the object, which is not always a good choice, but sometimes, when there really is no point in inventing pointless names for one of the two, it really is. (So I can write struct message message; .) It also makes it really easy to create ad-hoc types, which honestly is the killer feature that convinced me to switch to C. I think this is the most elegant way to create new types for single use, short of getting rid of explicit types altogether.
> It also makes it clumsier to refactor the code (adding or subtracting the leading `struct`).
I never had that problem, and don't know when it occurs and why.
> The typedef workaround is extremely commonplace.
In my opinion that is not a workaround, but a feature. I also use typedefs when I want to declare an opaque type. This means that in the header file all function declarations refer to the opaque type, and in the implementation the type is only used with "struct". This also makes it obvious which types' internals you are supposed to touch and which not. (This is also what e.g. the Linux style guide recommends.)
> This isn't what you want to see - you want to see first the public interface, not the implementation details.
Maybe you do, but I don't. As in C the public interface and the implementation are split into different files, this problem doesn't occur. When I want to see the interface, I'm going to read the interface definition. When I look into the implementation file, I definitely don't expect to read the interface. What I'd rather see first is the dependencies (includes) and then the internal types. This fits "Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." Then I typically see default values and configuration. Afterwards, yes, I see the lowest-level functions.
> people tend to organize the code backwards, with the private leaf functions first and the public functions last.
Which results in a consistent organization. It also fits how you would write it in math or an academic context, where you only use what is already defined. It makes the file readable from top to bottom. When you are just looking for a specific thing, instead of trying to read it in full, you are searching and jumping around anyway.
> Oh, there is a looong list of kludgy problems stemming from a separate macro processor that is a completely distinct language from C. Even the expressions in a macro follow different rules than in C. If you've ever used a language with modules, you'll never want to go back to #include!
A macro language is surprising for the newcomer, but you get used to it, and I don't think there is a problem with include. Textual inclusion is about the easiest mental model you can have, and it is easy to control and verify. Coming from a language with modules before learning C, I never found it to be an issue, and find the emphasis on the bare filesystem rather refreshing.
> but in practice, why would one want a module name different from its filename?
True, I actually never wanted to include a file with spaces, but it is something where your concept breaks. Also you can write #include "foo/bar/../baz" just fine, and can even use absolute paths, if you feel like it.
> macro language is surprising for the newcomer, but you get used to it
This was one of the biggest paradigm shifts for me in mastering C. Once I learned to stop treating the preprocessor as a hacky afterthought, and realized that it's actually a first-class citizen in C and has been since its conception, I realized how beautiful and useful it really is when used the way the designers intended. You can do anything with it, literally anything, from reflection to JSON and YAML de/serialization to ad hoc generics. It's so harmonious, if unsightly, like the fat lady with far too much makeup singing the final opus.
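Not code from the comment, but the X-macro trick is one common example of that kind of poor man's reflection: a single field list expanded into both a struct definition and a function that walks it.

    #include <stdio.h>

    /* single source of truth for the fields */
    #define POINT_FIELDS(X) \
        X(int, x)           \
        X(int, y)           \
        X(int, z)

    /* expand once into a struct definition */
    #define AS_FIELD(type, name) type name;
    struct point { POINT_FIELDS(AS_FIELD) };
    #undef AS_FIELD

    /* expand again into a field-by-field dump */
    #define AS_PRINT(type, name) printf(#name " = %d\n", p->name);
    static void point_print(const struct point *p) { POINT_FIELDS(AS_PRINT) }
    #undef AS_PRINT

    int main(void) {
        struct point p = { 1, 2, 3 };
        point_print(&p);    /* x = 1, y = 2, z = 3 */
        return 0;
    }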
D accomplishes this by using Compile Time Function Execution to build D source code from strings, and then inline compiling the D code. Learning a macro language is unnecessary, as it's just more D code.
> You can do anything with it, literally anything, from reflection to JSON and YAML de/serialization to ad hoc generics.
Wow. Do you have any pointers? I always thought random computation with it is hard, because it doesn't really want to do recursion by design. Or are you talking about using another program as the preprocessor?
Yes, because I also don't know what this is supposed to mean? The product of two addresses? Dereferencing one pointer, and then combining them without an operator? And what's the type going to be, "pointer squared"?
Also what has this to do with the current discussion?
> Also what has this to do with the current discussion?
The point is C does not allow doing anything you want. The C type system, for example, places all kinds of restrictions on what code can be written. The underlying CPU does not have a type system - it will multiply two pointers just fine without complaint. The CPU does not even have a concept of a pointer. (The C preprocessor doesn't have a notion of types, either.)
The point of a type system is to make the code more readable and reduce user errors.
We have a difference of opinion on C. Mine is that C should have better rules to make code more readable and reduce user errors. Instead it remains stuck in a design from the 1970s, and has compromised semantics that result from the severe memory constraints of those days. You've defended a number of these shortcomings as being advantages.
Just for fun, I'll throw out another one. The C cast syntax is ambiguous:
(T)(3)
Is that a function call or a cast of 3 to type T? The only way to disambiguate is to keep a symbol table of typedef's so one can determine if T is a type or not a type. This adds significant complexity to the parser, and is completely unnecessary.
The fix D has for this is:
cast(T)(3)
where `cast` is a keyword. This has another advantage in that casts are a blunt tool and are associated with hiding buggy code. Having `cast` be easily searchable makes for better code reviews.
> The point is C does not allow doing anything you want.
I thought we were discussing specific issues; I did not claim that C doesn't have things that could be different. For example, the interaction of integer promotion and fixed-size types is completely broken (as in you can't write correct portable code) in my opinion.
> The C type system, for example, places all kinds of restrictions on what code can be written. The underlying CPU does not have a type system - it will multiply two pointers just fine without complaint. The CPU does not even have a concept of a pointer.
As you wrote, a pointer is not an address. The CPU lets you multiply addresses, but C also lets you multiply addresses just fine. The type for that is uintptr_t. Pointers are not addresses, e.g. ptr++ does not in general increment the address by one.
> The C preprocessor doesn't have a notion of types, either.
It doesn't even have a concept of symbols and identifiers, which makes it possible for you to construct these.
> You've defended a number of these shortcomings as being advantages.
Because I think they are. It's not necessarily the reason why they are there, but they can be repurposed for useful stuff and often are. Also resource constraints often result in a better product.
I still only declare variables at the beginning of a new block, not because I wouldn't write C99+ (I do), but because it makes the code easier to read when you can reason about the participating variables up front. I can still introduce a variable when I feel like it, just by starting a new block. This also lets me decide when the variables go out of scope again, so my variables only exist for as long as I really want them to, even if that is only 3 lines.
> Just for fun, I'll throw out another one.
That's just a minor problem in compiler implementation, and doesn't result in problems for the user. Using the same symbol for pointer dereference and multiplication is also similar.
(a) *b
Is that a cast or a multiplication? These make for funny language quizzes, but are of rare practical relevance. Real world compilers don't completely split syntactic and semantic parsing anyway, so they can emit better diagnostics and keep parsing upon an error.
> You've defended a number of these shortcomings as being advantages.
My initial comment was about a shortcoming, which doesn't actually exist.
C++ does not allow forward references outside of structs. The point-of-instantiation and point-of-declaration rules for templates produces all kinds of subtle problems. D does not have that issue.
Yes, you absolutely can get the job done with C and C++. But neither is an elegant language, and that puts a cognitive drag on writing and understanding code.
I'm sorry, is this an in-joke or satire or something? I can't really tell. Maybe a whoosh moment, and as others have said, the GP/person you are speaking about, Walter Bright, is the creator of the D language. Maybe you didn't read your parent's post? Not saying it's intentional, but it almost seems rude to keep speaking that way about someone present in the conversation.
Meanwhile, after UNIX was done at AT&T, the C language authors hardly cared for the C standard committee, as seen in the compiler features used in Plan 9 and Inferno, which were only "mostly" compatible, followed up by an authoring role in Alef, Limbo and Go.
> The language accepted by the compilers is the core ANSI C language with some modest extensions, a greatly simplified preprocessor, a smaller library that includes system calls and related facilities, and a completely different structure for include files.
> Meanwhile after UNIX was done at AT&T, the C language authors hardly cared for the C standard committee in regards to the C compiler supported features used in Plan 9 and Inferno, being only "mostly" compatible, followed up having a authoring role in Alef, Limbo and Go.
> I doubt most C advocates ever reflect on this.
What would be the conclusion of this reflection? Assuming you have reflected on this, what was your conclusion?
That the language authors concluded C was done, there was no point collaborating with WG14, and there were better tools to do their operating systems research on.
> there were better tools to do their operating systems research on.
I think that's the key, Ritchie, Thompson, Pike were interested in OS research while people that love C today just want a simple and powerful language with manual memory management. It is not the first time in history when the creation has a separate life from the creator's wishes.
The C committee is not afraid to add new syntax. And this is an easy addition.
Not only does it deliver a massive safety improvement, it dramatically speeds up strlen, strcmp, strcpy, strcat, etc. And you can pick out a substring without needing to allocate/copy. It's easy money.
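For example, with a (length, pointer) pair a substring is just a narrower view over the same bytes; a rough sketch with illustrative names:

    #include <stddef.h>

    struct str { const char *ptr; size_t len; };

    /* no allocation, no copy: the slice points into the original storage */
    static struct str str_slice(struct str s, size_t start, size_t n) {
        if (start > s.len)
            start = s.len;
        if (n > s.len - start)
            n = s.len - start;
        return (struct str){ s.ptr + start, n };
    }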
As I see it, the problem with languages trying to replace C is that they not only try to fix fundamental flaws, but feel compelled to add unneeded features and break C's simplicity.
C is a simple language, but that simplicity leads to non-portable code and lots of klunky, ugly things like using the preprocessor as a substitute for conditional compilation, imports, lambdas, metaprogramming, etc.
You don't have to use unneeded features that are in D. The core language is as simple as C, to the point where it is easy to translate C to D (in fact, the compiler will do it for you!).
"lots of klunky, ugly things"
I wonder how much of that is caused by too complex thinking. It seems that simplicity is very difficult for most people.
"You don't have to use unneeded features"
True, but that doesn't work in practice; for example, the intention to use only limited C++ features in new projects ends up bogged down with the other features anyway because of the "new toy to play with" effect.
What isn't there can't be used and keeps the language lean and clean (and not mean ;-) ).
D supports mixed C and D files in a project. The D code can call C functions and use C types, and the C code can call D functions using C types.
The D compiler will even translate C source code to D source code if you prefer! After using a mixed D/C program for a while, I bet you'll feel motivated to take the extra step and translate the C stuff to D, as the inelegance in C will become obvious :-)
Even simpler, you can do something like this to have length-delimited AND null-terminated strings (written from memory, no guarantees of correctness etc.):
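The snippet itself seems to be missing here; a guess at the shape being described - a length field plus a flexible array member that is also kept NUL-terminated:

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        size_t len;
        char   data[];            /* always NUL-terminated, so it still works with C-string APIs */
    } lstr;

    lstr *lstr_new(const char *src, size_t len) {
        lstr *s = malloc(sizeof *s + len + 1);
        if (!s)
            return NULL;
        s->len = len;
        memcpy(s->data, src, len);
        s->data[len] = '\0';
        return s;
    }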
Please don’t buy into “no const”. If you’ve ever worked with a lot of C/C++ code, you really appreciate proper const usage and it’s very obvious if a prototype is written incorrectly because now any callers will have errors. No serious reusable library would expose functions taking char* without proper const usage. You would never be able to pass a C++ string c_str() to such a C function without a const_cast if that were the case. Casting away const is and should be an immediate smell.
> In the absence of proper language support, “sum types” are just structs with discipline.
With enough compiler support they could be more than that. For example, I submitted a tagged union analysis feature request to gcc and clang, and someone generalized it into a guard builtin.
With proper discipline, one can even program a Turing machine directly. There are two problems: (1) doing so is very slow and arduous, and (2) the chance of making a dangerous error is still quite high.
For instance, it appears that no amount of proper discipline, even in the best developers, allows replacing proper array support with a naked pointer to a memory area.
you can certainly wrap the array with a structure which provides either bounds information to be checked with generic runtime functions, or specific function pointers (methods) to get and set.
you can paper over _a lot_ of C's faults. Ultimately it's not really worth it, but it's not nearly as fragile and arduous as you make it out to be
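A rough sketch of the first of those two approaches (bounds carried in a struct and checked by small helpers; names are illustrative):

    #include <assert.h>
    #include <stddef.h>

    struct int_array {
        int   *data;
        size_t len;
    };

    static int array_get(struct int_array a, size_t i) {
        assert(i < a.len);        /* bounds check on every access */
        return a.data[i];
    }

    static void array_set(struct int_array a, size_t i, int v) {
        assert(i < a.len);
        a.data[i] = v;
    }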
So that’s an interesting case. I’d really like to keep language neutrality, because I don’t think we’re finished evolving yet. So this is a place where we need an abi. The first things we try to do is be simple…except for a terrible mistake with select, we don’t send arrays across that interface, sadly, we send c structs sometimes and I think that’s pretty horrible, because we have to try to lay them out in a compatible way, which is pretty fragile. The other sad bit is that we need to verify the addresses before we can operate on them, and that’s hugely prone to error.
I'm curious if you have a suggestion about how to fix both of those. The structure thing can clearly be a more robust serialization. Addresses? Idk
As a matter of course, every structure that may have a variable size should start with a length designator. Lengths 1 to 32767 take two bytes of a designator, 32768 to 2147483647 take four bytes, larger takes 8 bytes. Realistically 62 bits should suffice for any practical case, but arbitrary-size integers are well-known, and are easy to unpack and operate on.
This may slightly increase the size of some structures, but most of the time it would not, because of the alignment padding inherent to most structures anyway. But an entire class of vulnerabilities would be gone. This doesn't even need a change in the language, even though direct syntactic support would be nice. It just takes discipline when designing APIs.
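A sketch of what that discipline could look like for the two-byte case (the wider designators follow the same pattern; layout and names are assumptions):

    #include <stddef.h>
    #include <stdint.h>

    /* every variable-size message starts with its own length designator */
    struct message {
        uint16_t size;               /* total size in bytes, 1..32767 */
        uint16_t kind;
        unsigned char payload[];
    };

    /* receivers can reject anything whose claimed size doesn't match what actually arrived */
    static int message_ok(const struct message *m, size_t received) {
        return received >= sizeof *m && m->size == received;
    }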
The compiler's job is to program the turing machine for us. It should help as much as possible. For example, I really like using enums because compilers have extensive support for checking that all values have been handled in switch statements.
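For instance, a minimal example of that switch checking (with -Wall or -Wswitch, dropping one of the cases below produces a warning):

    enum color { RED, GREEN, BLUE };

    const char *color_name(enum color c) {
        switch (c) {                 /* no default: the compiler can see which enumerators are missing */
        case RED:   return "red";
        case GREEN: return "green";
        case BLUE:  return "blue";
        }
        return "?";
    }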
I don't like it when compilers start getting in the way though. We use C because we want to do raw things like point a structure at some memory area in order to access the data stored there. The compiler's job is to generate the expected code without screwing it up by "optimizing" it beyond recognition because of strict aliasing or some other nonsense.
It is on my list (also as a proposal to WG14). Sorry, I am a bit too overloaded currently. (If people want to help with such improvements, with either time or money, let me know.)
FWIW, Coverity (maybe others) has a checker that creates an error if it detects tagged union access without first checking the tag. It’s not as strict as enforcing which fields belong to which tag values, but it can still be useful. I’d much rather have what was proposed in the GCC bug!
I'm a huge fan of the 'parse, don't validate' idiom, but it feels like a bit of a hurdle to use it in C - in order to really encapsulate and avoid errors, you'd need to use opaque pointers to hidden types, which requires the use of malloc (or an object pool per-type or some other scaffolding, that would get quite repetitive after a while, but I digress).
You basically have to trade performance for correctness, whereas in a language like C++, that's the whole purpose of the constructor, which works for all kinds of memory: auto, static, dynamic, whatever.
In C, to initialize a struct without dynamic memory, you could always do the following:
    struct Name {
        const char *name;
    };

    int parse_name(const char *name, struct Name *ret) {
        if (name) {
            ret->name = name;
            return 1;
        } else {
            return 0;
        }
    }

    // in user code, *hopefully*...
    struct Name myname;
    parse_name("mothfuzz", &myname);
But then anyone could just instantiate an invalid Name without calling the parse_name function and pass it around wherever. This is very close to 'validation' type behaviour. So to get real 'parsing' behaviour, dynamic memory is required, which is off-limits for many of the kinds of projects one would use C for in the first place.
I'm very curious as to how the author resolves this, given that they say they don't use dynamic memory often. Maybe there's something I missed while reading.
> But then anyone could just instantiate an invalid Name without calling the parse_name function and pass it around wherever
This is nothing new in C. This problem has always existed by virtue of all struct members being public. Generally, programmers know to search the header file / documentation for constructor functions, instead of doing raw struct instantiation. Don't underestimate how good documentation can drive correct programming choices.
C++ is worse in this regard, as constructors don't really allow this pattern, since they can't return a None / false. The alternative is to throw an exception, which requires a runtime similar to malloc.
In C++ you would have a protected constructor and related friend utility class to do the parsing, returning any error code, and constructing the thing, populating an optional, shared_ptr, whatever… don’t make constructors fallible.
Sometimes you want the struct to be defined in a header so it can be passed and returned by value rather than pointer.
A technique I use is to leverage GCC's `poison` pragma to cause an error if attempting to access the struct's fields directly. I give the fields names that won't collide with anything, use macros to access them within the header and then `#undef` the macros at the end of the header.
Example - an immutable, pass-by-value string which couples the `char*` with the length of the string:
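The example itself appears to be missing above; a sketch of the pattern as described in the previous paragraph (field names, macro names and the implementation escape hatch are all assumptions):

    /* string.h -- sketch, not the original header */
    #ifndef STRING_T_H
    #define STRING_T_H
    #include <stddef.h>

    typedef struct {
        char  *priv_chars_;          /* deliberately awkward names, poisoned below */
        size_t priv_len_;
    } string_t;

    /* accessor macros usable inside this header (e.g. by inline functions) ... */
    #define STR_CHARS(s) ((s).priv_chars_)
    #define STR_LEN(s)   ((s).priv_len_)

    static inline size_t string_length(string_t s) { return STR_LEN(s); }

    /* ... and withdrawn at the end of it */
    #undef STR_CHARS
    #undef STR_LEN

    /* from here on, touching the fields by name is a hard error under GCC */
    #ifndef STRING_T_IMPLEMENTATION
    #pragma GCC poison priv_chars_ priv_len_
    #endif

    #endif /* STRING_T_H */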
It just wraps `<string.h>` functions in a way that is slightly less error prone to use, and adds zero cost. We can pass the string everywhere by value rather than needing an opaque pointer. It's equivalent on SYSV (64-bit) to passing them as two separate arguments:
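(The comparison that colon points at is also missing; roughly, it contrasts prototypes along these lines - hypothetical names:)

    #include <stddef.h>

    typedef struct { size_t len; const char *chars; } string_t;

    void     take_string(string_t s);                           /* pass the pair by value ...     */
    void     take_parts(size_t len, const char *chars);         /* ... vs two separate arguments  */

    string_t make_string(void);                                  /* return the pair by value ("the former") */
    void     make_string_out(size_t *len, const char **chars);   /* out-parameter idiom ("the latter")      */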
These DO NOT have the same calling convention. The latter is less efficient because it needs to dereference a pointer to return the out parameter. The former just returns length in `rax` and chars in `rdx` (`r0:r1`).
So returning a fat pointer is actually more efficient than returning a size and passing an out parameter on SYSV! (Though only marginally because in the latter case the pointer will be in cache).
Perhaps it's unfair to say "zero-cost" - it's slightly less than zero - cheaper than the conventional idiom of using an out parameter.
But it only works if the struct is <= 16-bytes and contains only INTEGER types. Any larger and the whole struct gets put on the stack for both arguments and returns. In that case it's probably better to use an opaque pointer.
That aside, when we define the struct in the header we can also `inline` most functions, so that avoids unnecessary branching overhead that we might have when using opaque pointers.
`#pragma GCC poison` is not portable, but it will be ignored wherever it isn't supported, so this won't prevent the code being compiled for other platforms - it just won't get the benefits we get from GCC & SYSV.
The biggest downside to this approach is we can't prevent the library user from using a struct initializer and creating an invalid structure (eg, length and actual string length not matching). It would be nice if there were some similar to trick to prevent using compound initializers with the type, then we could have full encapsulation without resorting to opaque pointers.
> The biggest downside to this approach is we can't prevent the library user from using a struct initializer and creating an invalid structure (eg, length and actual string length not matching). It would be nice if there were some similar to trick to prevent using compound initializers with the type, then we could have full encapsulation without resorting to opaque pointers.
Hmm, I found a solution and it was easier than expected. GCC has `__attribute__((designated_init))` we can stick on the struct which prevents positional initializers and requires the field names to be used (assuming -Werror). Since those names are poisoned, we won't be able to initialize except through functions defined in our library. We can similarly use a macro and #undef it.
Full encapsulation of a struct defined in a header:
Aside from horrible pointer aliasing tricks, the only way to create a `string_t` is via `string_alloc_from_chars` or other functions defined in the library which return `string_t`.
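The header isn't shown either; combining the two tricks might look roughly like this (a reconstruction - only the function names, which appear in the usage example below, come from the comment):

    /* in the header, alongside the poisoned field names from the earlier sketch: */
    typedef struct __attribute__((designated_init)) {
        char  *priv_chars_;
        size_t priv_len_;
    } string_t;

    /* the only sanctioned ways in and out of a string_t */
    string_t    string_alloc_from_chars(const char *chars);
    int         string_is_valid(string_t s);
    const char *string_to_chars(string_t s);
    void        string_free(string_t s);

The library's own implementation file would still need some escape hatch (e.g. a guard macro around the poison) to touch the fields.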
    #include <stdio.h>

    int main() {
        string_t s = string_alloc_from_chars("Hello World!");
        if (string_is_valid(s))
            puts(string_to_chars(s));
        string_free(s);
        return 0;
    }
If you really insist on not having a distinction between "u8"/"i8" and "unsigned char"/"signed char", and you've gone to the trouble of refusing to accept CHAR_BIT!=8, I'm pretty sure it'd be safer to typedef unsigned char u8 and typedef signed char i8. uint8_t/int8_t are not necessarily character types (see 6.2.5.20 and 7.22.1.1) and there are ramifications (see, e.g., 6.2.6.1, 6.3.2.3, 6.5.1).
> and you've gone to the trouble of refusing to accept CHAR_BIT!=8
This one was a head-scratcher for me. Yeah, there's no cost to check for it, but architectures where CHAR_BIT != 8 are rarer even than 24-bit architectures.
I got the impression the author was implying because CHAR_BIT is enforced to be 8 that uint8_t and char are therefore equivalent, but they are different types with very different rules.
E.g. `uint8_t *p = (uint8_t *)&astruct` may violate strict aliasing, but `char *p = (char *)&astruct` is guaranteed legal. Then modulo, traps, padding, overflow, promotion, etc.
With the disclaimer that I let my language lawyer qualification lapse a while ago, it's broadly to do with the character types being the only approved way to examine the bytes of an object. An object of a type can be accessed only as if it were an object of that type or some compatible type, but: it can also be accessed as a sequence of characters. (You'd do this if implementing memcpy, memset or memcmp, for example.)
6.2.6.1 - only character types can be used to inspect the sequence of bytes making up an object, and (interestingly) only an array of unsigned char is suitable for memcpy'ing an object into for inspection. It's possible for sequences of bytes to exist that don't represent a valid value of the original object; it's undefined behaviour to read those sequences of bytes other than via a character type (i.e., I think, via a pointer to something compatible with the object's actual type - there being no other valid ways to even attempt to read it)
6.3.2.3 - when casting a pointer to an object type to a pointer to a character type, the new character pointer points to the bytes of the object. If converting between object types, on the other hand, the original pointer will (with care) round trip, and that seems to be all you can do, and actual access is not permitted
6.5.1 - as well as all the expected ways of accessing an object, objects can be accessed via a character pointer
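In code, that means the sanctioned way to look at an object's representation is through an unsigned char pointer (or by memcpy'ing into an unsigned char array); a small illustration:

    #include <stdio.h>

    struct thing { int a; float b; };

    /* character types are allowed to alias any object, so this access is well-defined */
    static void dump_bytes(const void *obj, size_t n) {
        const unsigned char *p = obj;
        for (size_t i = 0; i < n; i++)
            printf("%02x ", p[i]);
        printf("\n");
    }

    int main(void) {
        struct thing t = { 42, 1.5f };
        dump_bytes(&t, sizeof t);
        return 0;
    }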
Well, in 33 years it has learnt nothing about memory-safe programming; at least C++ provides the tooling for those that care, even before governments decided to act on it.
Solid list. The bit about avoiding the preprocessor as much as possible really resonates—using `static inline` functions and `enum` instead of macros makes debugging so much less painful. What's your take on using C11's `_Generic` for type-generic macros? It adds some verbosity but can save you from a lot of runtime type errors.
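For reference, a small _Generic example of the kind of type-generic macro being asked about (illustrative, not from the article):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* pick the right absolute-value function from the argument's type at compile time */
    #define my_abs(x) _Generic((x), \
        int:    abs,                \
        long:   labs,               \
        float:  fabsf,              \
        double: fabs)(x)

    int main(void) {
        printf("%d %f\n", my_abs(-3), my_abs(-2.5));
        return 0;
    }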
Regarding memory, I recently changed to try to not use dynamic memory, or if I need to, to do it once at startup. Often static memory on startup is sufficient.
Instead, use the stack much more and have a limit, fixed at startup, on how much data the program can handle. It adds the need to think about what happens if your system runs out of memory.
Like OP said, it's not a solution for all types of programs. But it makes for very stable software with known and easily tested error states. Also adds a bit of fun in figuring out how to do it.
As someone who spent most of their career as an embedded dev, yes, this is fine for (like parent said) some types of software.
Even in places where you'd think this is a bad idea, it can still be a good approach, for example allocating and mapping all memory up to the limit you are designing for. Honestly this is how engineering is done - you have specified limits in the design, and you work explicitly to those limits.
So "allocate everything at startup" need not be "allocate everything at program startup", it can be "allocate everything at workflow startup", where "workflow" can be a thread, a long-running input-directed sequence of functions, etc.
For example, I am starting a tiny stripped down web-server for a project, and my approach is going to be a single 4KB[1] block for each request, allocated via a pool (which can expand under pressure up to some maximum) and returned to the pool once the response is sent.
The 4KB includes at most 14 headers (regardless of each header's size) with the remaining data for the JSON payload. The JSON payload is limited to at most 10 fields. This makes parsing everything "allocate-less" because the array holding pointers to the keys+values of the headers is `const char *headers[14]` and to the payload JSON data `const char *fields[10]`.
A request that doesn't fit in any of that will be rejected. This means that everything is simple and the allocation for each request happens once at startup (pool creation) even while parsing the input.
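A sketch of that fixed layout (the 4KB, 14 and 10 come from the comment; the struct itself is an assumption):

    struct request {
        char        buf[4096];       /* raw request bytes: headers + JSON payload     */
        const char *headers[14];     /* pointers into buf, one per accepted header    */
        const char *fields[10];      /* pointers into buf, one per JSON payload field */
        int         nheaders;
        int         nfields;
    };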
I'm toying with the idea of doing the same for responses too, instead of writing it out as and when the output is determined during the servicing of the request.
-------------------------
[1] I might switch to 6KB or 8KB if requests need more; whatever number is chosen, it's going to be a static number.
In recent years I had to write some firmware code with C and that was exactly the approach I took. So far I never had need for any dynamic memory and I was surprised how far I can get without it.
This is the way. Allocate all memory upfront. Create an allocator if you need to divvy it up dynamically. Acquire all resources up front. Try to fit everything on the stack. Much easier that way.
Only allocate on the heap if you absolutely have to.
Dynamic memory allocation solves the problem of dynamic business requirements.
If you know your requirements up front, static memory initialisation is the way.
For instance, indexing a typed array with an enum is no different than an unordered map of string to int, IF you have all your business requirements up front
I've been looking into Ada recently and it has cool safety mechanisms to encourage this same kind of thing. It even allows you to dynamically allocate on the stack for many cases.
True, but in many environments where C is used the stacks may be configured with small sizes and without the possibility of being grown dynamically.
In such environments, it may be needed to estimate the maximum stack usage and configure big enough stacks, if possible.
Having to estimate maximum memory usage is the same constraint when allocating a static array as a work area, then using a custom allocator to provide memory when needed.
Sure, the parent was commenting more about the capability existing in Ada in contrast to C. Ada variable length local variables are basically C alloca(). The interesting part in Ada is returning variable length types from functions and having them automatically managed via the “secondary stack”, which is a fixed size buffer in embedded/constrained environments. The compiler takes care of most of the dirty work for you.
We mainly use C++, not C, and we do this with polymorphic allocators. This is our main allocator for local stack:
alloca is certainly worse. Worst-case fixed-size arrays on the stack are also worse. If you need a variable-sized array on the stack, VLAs are the best alternative. Also, many other languages such as Ada have them.
I have some firmware that runs an event loop. There is no malloc anywhere. But I do have an area which gets reset after each event handler call. Useful for passing objects up the call stack.
One other thing I tend to do: anything that needs to live longer than the current call stack gets copied into a queue of some sort. I feel it's kind of doing manually what Rust's borrow checker tries to enforce.
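A sketch of that per-event scratch area - a bump allocator the event loop resets after each handler (sizes and names are assumptions):

    #include <stddef.h>

    static _Alignas(max_align_t) unsigned char scratch[4096];
    static size_t scratch_used;

    /* hand out memory that only needs to live until the current event is handled */
    void *scratch_alloc(size_t n) {
        n = (n + 15) & ~(size_t)15;              /* keep allocations aligned */
        if (n > sizeof scratch - scratch_used)
            return NULL;
        void *p = scratch + scratch_used;
        scratch_used += n;
        return p;
    }

    /* called by the event loop after each handler returns */
    void scratch_reset(void) {
        scratch_used = 0;
    }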
Not distracting at all, it feels nostalgic to me. I'd rather have these flashy things than a million popups and registration forms following you around, which is basically the modern web. I hate it so much. This site is pure balsam for my soul.
> I don’t personally do things that require dynamic memory management in C often, so I don’t have many practices for it. I know that wellons & co. Have been really liking the arena, and I’d probably like it too if I actually used the heap often. But I don’t, so I have nothing to say.
> If I find myself needing a bunch of dynamic memory allocations and lifetime management, I will simply start using another language–usually rust or C#.
I'm not sure what the modern standards are, but if you are writing in C, pre-allocate as much as possible. Any kind of garbage collection is just extra processing time and ideally you don't want to run out of memory during an allocation mid-execution.
People may frown at C, but nothing beats getting your inner loops into CPU cache. If you can avoid extra fetches into RAM, you can really crank some processing power. Example projects have included computer vision, servers, and a custom neural network - all of which had no business being so fast.
Two things I thought while reading the post:
Why not typedef _BitInt types for stricter size and accidental-promotion control, when typedeffing for easier names anyway?
I came across a post mentioning using regular arrays instead of strings to avoid the null terminator and off-by-one pitfalls.
I still have a lot of conversion to do before I can try this in my hobby project, but these are interesting ideas.
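On the first point, a minimal sketch of the _BitInt typedef idea (C23; bit-precise types don't undergo the usual integer promotions):

    /* exact-width, promotion-free integer typedefs */
    typedef unsigned _BitInt(8)  u8;
    typedef   signed _BitInt(8)  i8;
    typedef unsigned _BitInt(24) u24;    /* widths other than the usual 8/16/32/64 also work */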
Even if the code might not end up requiring it, if you write it with the assumption that bytes are 8 bits, it's good to document that with a static assert so someone porting things knows there will be dragons
It's a pretty neat way to drop some corner cases from your mental load without building subtle traps
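Concretely, the assert can be as small as this (C11 or later):

    #include <assert.h>     /* static_assert */
    #include <limits.h>     /* CHAR_BIT */

    static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");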
That's pretty silly IMHO, it should be incredibly obvious to anybody who is ever in a position to port code to a machine with non-8-bit bytes that there will be dragons there. It also requires including limits.h which you might not otherwise need.
It's just not a realistic edge case, the machines like this are either antiquated or are tiny microcontrollers that can't practically run a POSIX OS. Very little code in the real world is generic enough to be useful in that environment (a good example might be a fixed point signal processing library).
There is no assertion in the entire Linux kernel that CHAR_BIT is eight, despite that assumption being hardcoded in many places.
> Additionally, the intent of whether the buffer is used as “raw” memory chunks versus a meaningful u8 is pretty clear from the code that it gets used in, so I’m not worried about confusing intent with it.
It's generally not clear to the compiler, and that can result in missed optimization opportunities.
I really dislike parsing not validating as general advice. IMO this is the true differentiator of type systems that most people should be familiar with instead of "dynamic vs static" or "strong vs weak".
Adding complexity to your type system and to the representation of types within your code has a cost in terms of mental overhead. It's become trendy to have this mental model where the cost of "type safety" is paid in keystrokes but pays for itself in reducing mental overhead for the developers. But in reality you're trading one kind of mental overhead for another, the cost you pay to implement it is extra.
It's like "what are all the ways I could use this wrong" vs "what are all the possibilities that exist". There's no difference in mental overhead between between having one tool you can use in 500 ways or 500 tools you can use in 1 way, either way you need to know 500 things, so the difference lies elsewhere. The effort and keystrokes that you use to add type safety can only ever increase the complexity of your project.
If you're going to pay for it, that complexity has to be worth it. Every single project should be making a conscious decision about this on day one. For the cost to be worth it, the rate of iteration has to be low enough and the cost of runtime bugs has to be high enough. Paying the cost is a no brainer on a banking system, spacecraft or low level library depended on by a million developers.
Where I think we've lost the plot is that NOT paying the cost should be a no brainer for stuff like front end web development and video games where there's basically zero cost in small bugs. Typescript is a huge fuck up on the front end, and C++ is a 30 year fuck up in the games industry. Javascript and C have problems and aren't the right languages for those respective jobs, but we completely missed the point of why they got popular and didn't learn anything from it, and we haven't created the right languages yet for either of those two fields.
Same concept and cost/benefit analysis applies to all forms of testing, and formal verification too.
While I broadly agree with your general point, in that engineering is making a set of trade-offs, I don't necessarily agree that ditching type-safety in the example contexts you posted is the appropriate trade-off.[1]
I'll ditch type-safety in experimental/exploratory code; I'll use Lisp (or, more recently, Python) to test if something is a good idea. For anything that ships to production, I think a basic level of type enforcement is necessary, even if you don't want the whole type zoo.
For your Javascript f/end context, I like the proposed TC39 approach (https://github.com/tc39/proposal-type-annotations?tab=readme...). The typing is optional, does not break existing syntax and can still be used to enforce a basic level of type safety if the developer wants it.
----------------------------
[1] I upvoted you anyway. Your broader point is still valid.
I'm not talking about ditching type safety. I'm saying the whole concept of "safe" and "unsafe" as most people on HN understand it is flawed. The interesting part of a type system isn't whether the compiler checks types or if we just go lmao fuck it let's not even bother, it's whether or not you need to represent the types in your code in order for the compiler to check them. For the majority of what people want from type safety in a language like Javascript, the answer is that no, you don't need to, as long as you're willing to not have every single language feature under the sun.
With compiled languages you can statically infer a ton of type information without having to pepper your codebase with repeated references to what something is. Nominal typing essentially boils down to a double-check of your work, you specify the type separately and then purposely assign it to a variable, so that if you make a mistake with either part the compiler picks it up.
But those kinds of double-checks can be done for almost anything (outside of dynamic boundaries like io/dlls) without nominal type signatures in the code, as long as you jettison the ability to change types at runtime. No language as far as I can tell actually does this because we're all so obsessed with the false dichotomy of nominal and dynamic typing.
In JS everyone likes to use string unions in place of enums so let's use that as an example. If you have something that is only ever set as "foo" or "bar", that's effectively a boolean. If you receive that string in another function, make a typo and write if (str == "boo"), then in every single language I'm aware of that passes a compiler check. But it shouldn't, because the compiler has all the information it needs to statically catch that error and fail the build. The set of assignments to that variable and the set of equality checks on it provide the two parts of the double-check.
In a perfect world we'd have 10 of these "middle of the road" strongly typed static languages to choose from that all optimise for minimal type representation in their own unique way. But every time I see one of these projects pop up on HN it gets like 10 comments then disappears into the sunset because the programming community is so enraptured with the nominal type system of C and all the fucking bullshit Bjarne Stroustrup pasted on top of it 40 years ago. So we end up with this silly situation where the only things considered "safe" by the crowd are strict descendants of C/C++ with the array/pointer/string screw-ups that made those languages unsafe removed.
> I think one of the most eye-opening blog posts I read when getting into programming initially was the evergreen parse, don’t validate post
Bro, that was written in 2019. If it's not old enough to drink it's not yet evergreen. But it's also long-winded. A 25-minute read, and y'know what the conclusion is? "Parsing leaves you with a new data structure matching a type, validation checks if some data technically complies with a type (but might not later be parsed correctly)".
I need all the baby programmers in the back to hear me: type systems are bikeshedding. The point of a type is only to restrict computation to a fixed set. This concept can be applied anywhere you need to ensure reliability and simplicity. You don't need a programming language to natively support types in order to implement the concept yourself in that language.
> You don't need a programming language to natively support types in order to implement the concept yourself in that language.
In a programming language that doesn't enforce types, how do you implement
> "Parsing leaves you with a new data structure matching a type, validation checks if some data technically complies with a type (but might not later be parsed correctly)".
There are also some things in C that do not work or work differently in C++, such as (void*), empty structures (which in C++ are not really empty), etc; and there is also such C++ stuff such as name mangling, the C++ standard library, etc, even if those things are not a part of your program, which is another reason why you might prefer C.
It is like those folks who would rather write JSDoc comments than use a linter like TypeScript, because reasons.
Given the C++ adoption in 1990's commercial software and major consumer operating systems (Apple, IBM, Microsoft, Be), I bet that if the FSF with their coding guidelines had not advocated for C, its adoption would not have taken off beyond those days.
"Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. So please write in C."
> C23 + <compiler C extensions> is hardly simpler as people advocate.
Well, certainly simpler than C++, at any rate.
I mean, just knowing the assignment rules in C++ is worthy of an entire book on its own. Understandably, the single rule of "assignment is a bitwise copy of the source variable into the destination variable" is inflexible, but at least the person reading the local code can, just from the current scope, determine whether some assignment is a bug or not!
In many ways, C++ requires global context when reading any local scope: will the correct destructor get called? Can this variable be used as an argument to a function (a lack of a copy constructor results in a bitwise copy onto the stack, with the destructor for that instance running twice - once for the copy and again when the original goes out of scope)? Is this being passed by reference (i.e. it might be modified by the function we are calling) or by value (i.e. we don't need to worry about whether `bar` has been changed after a call to `foo(bar)`)?
Many programmers don't like holding lots of global scope in their head when working in some local scope. In C, all those examples above are clear in the local scope.
All programmers who prefer C over C++ have already tried C++ in large and non-trivial projects before walking away. I doubt that the reverse is true.
Where do you think the first generations from C++ programmers come from?
There is this urban myth C is simple, from folks that never read either ISO C manual, can't read legalese, never spent much time browsing the compiler reference manual.
They mostly learnt K&R C and assume the world is simple, until the code gets ported to another platform or compiler.
Yet in such a simple language, I keep waiting to meet the magical developer that never wrote memory corruption errors with pointer arithmetic, string and memory library functions.
> There is this urban myth C is simple, from folks that never read either ISO C manual, can't read legalese, never spent much time browsing the compiler reference manual.
And yet you know from previous discussions that folks like Uecker and myself have done all those things, and still walked away from C++.
In my case, I stepped back even after having a decade of work experience in it. For anything needing more abstraction than C, C++ is not going to be a good fit anyway (there are better languages).
> Yet in such a simple language, I keep waiting to meet the magical developer that never wrote memory corruption errors with pointer arithmetic, string and memory library functions.
Who made that claim? This sounds like a strawman - "If you use C you'll never make this class of errors", which no one said in this conversation.
In any case, the point is even more true of C++ - I have yet to meet this magical C++ programmer that never hits the few dozens of footguns it has that C doesn't.
> Internet is full of people asserting CVEs in C are only caused by not skilled enough devs.
Sure, but those people are not here, and usually aren't on HN anyway.
The internet is also full of people asserting that CVEs in C++ are only caused by not skilled enough devs, but I consider those people irrelevant too.
The reasons for rejecting C++ in this forum have been repeated often enough that you should have seen them by now: C++ has major systemic problems that don't exist in many other languages, including C.
It should be no surprise to you, at this point, that people choose almost anything over C++. The fact that "anything" also includes "C" is mostly incidental.
No one is asserting that they reject C++ because C is better, they typically reject it for concrete reasons, like the ones I pointed out upthread.
> Might be, then again C23 isn't K&R C that many still learn from.
I agree with this, but then again, not many people are learning C now anyway. It will die away from natural attrition anyway, is my point.
K&R C does have a few advantages, because the compilers at the time were not so aggressive in optimisation and would consistently emit code that (for example) performed a NULL dereference (or other UB), ensuring things like consistently crashing instead of silently losing data or doing the wrong thing.