
As someone who works on the innards of a cooperative multitasking embedded OS for a living, I am sick and tired of C and C++.

Closures are a huge one. Everything is async in HW land: an interrupt fires and calls you when data is ready, so doing a read looks like:

    void MyOpInitialFunction()
    {
        // Business logic
        AsyncReadFromFlash(sDataBuffer, sizeOfBuff, sizeToRead, MyOpFirstCallback, contextVar);
    }

    void MyOpFirstCallback(byte* dataBuf, size_t sizeRead, void* context)
    {
        // Business logic, use the data read in somehow
        AsyncReadFromFlash(sDataBuffer, sizeOfBuff, sizeToRead, MyOpSecondCallback, contextVar);
    }

    void MyOpSecondCallback(byte* dataBuf, size_t sizeRead, void* context)
    // and so on and so forth

It gets boring really fast. The system has a nice DMA controller that we use a lot, so a similar pattern exists for copying large chunks of data around.

You can actually implement a variant of closures in C using macros, and right now the entire team wishes we had done that! (It isn't hard; in embedded land you have complete memory control, so you can just memcpy the entire stack down to a given known location and memcpy it back to resume!)
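
For what it's worth, a bare-bones sketch of that trick might look something like the following. All the names (SaveContext, RestoreContext, stackBase) are made up for illustration, it assumes a downward-growing stack, and it leans on longjmp'ing back into a frame we previously abandoned, which is outside the C standard but workable on a bare-metal target where you own the entire stack. A sketch of the idea, not production code:

    #include <setjmp.h>
    #include <string.h>
    #include <stdint.h>

    typedef struct {
        jmp_buf  resumePoint;            /* registers + resume address                    */
        uint8_t  savedStack[512];        /* size this for your worst-case task depth      */
        size_t   savedSize;
        uint8_t* stackBase;              /* high-address boundary of the task's frames;
                                            the dispatcher sets this to the address of
                                            one of its locals before calling the task     */
    } SavedContext;

    /* Called by the task at the point it wants to be resumable. Returns 0 the first
       time through, nonzero once we are resumed. */
    int SaveContext(SavedContext* ctx)
    {
        uint8_t marker;                  /* approximates the current stack pointer; real
                                            code would grab the actual SP                 */
        if (setjmp(ctx->resumePoint) != 0)
            return 1;                    /* we've been resumed                            */
        ctx->savedSize = (size_t)(ctx->stackBase - &marker);
        memcpy(ctx->savedStack, &marker, ctx->savedSize);
        return 0;
    }

    /* Copies the saved bytes back and jumps in. Recurses first so that this frame
       (and memcpy's) end up below the region being restored and don't get trampled. */
    void RestoreContext(SavedContext* ctx)
    {
        volatile uint8_t pad[64];        /* forces each recursion to consume real stack   */
        pad[0] = 0;
        if ((uint8_t*)&pad[0] >= ctx->stackBase - ctx->savedSize)
        {
            RestoreContext(ctx);         /* never returns; the deepest call longjmps out  */
        }
        memcpy(ctx->stackBase - ctx->savedSize, ctx->savedStack, ctx->savedSize);
        longjmp(ctx->resumePoint, 1);
    }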

Rust has a lot of features that are really useful for embedded, the problem is that it is going to be a long time, if ever, that the embedded world switches over.

Embedded systems tend to have very little code space, so efficiency of generated code is a must. LLVM and even GCC aren't as efficient as ARM's (horribly overpriced, feature-lacking) compilers. The one thing ARM's compilers do well, though, is minimize code space, and they do a very good job of it.

The other issue is that any sort of language runtime is questionable. Even the C runtime is typically not included in embedded (teams opt out of using it); if you want printf, you typically write whatever subset of printf functionality you need!
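
As a rough illustration of how small such a subset can be (this isn't our code; UartPutChar and TinyPrintf are made-up names for a board-specific output routine and the wrapper), handling just %s, %u and %x fits in a few dozen lines:

    #include <stdarg.h>

    extern void UartPutChar(char c);      /* hypothetical board-specific output routine */

    static void PutUnsigned(unsigned value, unsigned base)
    {
        char digits[16];
        int  i = 0;
        do {
            unsigned d = value % base;
            digits[i++] = (char)(d < 10 ? '0' + d : 'a' + (d - 10));
            value /= base;
        } while (value != 0);
        while (i > 0)
            UartPutChar(digits[--i]);     /* emit most-significant digit first */
    }

    void TinyPrintf(const char* fmt, ...)
    {
        va_list args;
        va_start(args, fmt);
        for (; *fmt != '\0'; ++fmt)
        {
            if (*fmt != '%') { UartPutChar(*fmt); continue; }
            ++fmt;
            if (*fmt == '\0') break;      /* stray '%' at the end of the format */
            switch (*fmt)
            {
            case 's': { const char* s = va_arg(args, const char*);
                        while (*s) UartPutChar(*s++);            break; }
            case 'u': PutUnsigned(va_arg(args, unsigned), 10);   break;
            case 'x': PutUnsigned(va_arg(args, unsigned), 16);   break;
            case '%': UartPutChar('%');                          break;
            default:  UartPutChar('?');                          break;  /* unsupported */
            }
        }
        va_end(args);
    }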

Oddly enough, memory safety is not a real issue, though slightly better buffer-overrun protection than what we get from static analysis would be nice!

Consumers hand over a destination buffer to producers, who return it in a callback. Because of the async pattern above, the consumer isn't even running until the data is ready.

That's solved 99% of our issues. :) (If you want someone else's data, you ask for it and get it passed in a buffer you provide!)
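
A minimal sketch of that contract, with all names hypothetical (the producer here "completes" immediately just to keep the snippet self-contained; real code would complete from an interrupt or DMA-done handler):

    #include <stddef.h>
    #include <stdint.h>

    typedef void (*ReadDoneCallback)(uint8_t* buffer, size_t bytesFilled, void* context);

    typedef struct {
        uint8_t*         buffer;     /* consumer-owned destination                 */
        size_t           capacity;
        ReadDoneCallback onDone;     /* how the producer hands the buffer back     */
        void*            context;
    } ReadRequest;

    /* Producer side: in real code this queues the request, and the completion
       interrupt later fills the buffer and fires the callback. Here it completes
       immediately for illustration. */
    void ProducerQueueRead(ReadRequest* request)
    {
        size_t filled = request->capacity;       /* pretend we filled the whole buffer */
        request->onDone(request->buffer, filled, request->context);
    }

    /* Consumer side: provide the buffer, then simply don't run until called back. */
    static uint8_t sSensorData[256];

    static void OnSensorData(uint8_t* buffer, size_t bytesFilled, void* context)
    {
        (void)context;
        /* Ownership of the buffer is back with the consumer; it is safe to use here. */
        (void)buffer; (void)bytesFilled;
    }

    void ConsumerStartRead(void)
    {
        static ReadRequest sRequest;             /* static: no heap allocation          */
        sRequest.buffer   = sSensorData;
        sRequest.capacity = sizeof(sSensorData);
        sRequest.onDone   = OnSensorData;
        sRequest.context  = NULL;
        ProducerQueueRead(&sRequest);            /* buffer now belongs to the producer  */
    }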



Any thoughts on C++ coroutines? Microsoft has a draft implemented in their compiler, and there's a second competing proposal.

https://msdn.microsoft.com/en-us/magazine/mt573711.aspx

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n445...

Ideally we'd implement something similar in Rust.


Embedded land is stuck with C++ 2003.

It sort of sucks.

(This is a world where v-tables are still a concern; they take up way too much code space if inheritance is allowed to get out of hand.)

Allocation is basically verboten. This line of code is very real in embedded:

  #define malloc assert

Back to being on topic: current C++ template code, with futures et al., is horrible and may cause eye cancer; Microsoft's final code is obviously cleaner. (The counter proposal is much better written, though; they cover a lot more information.)

Now, if I am in happy sappy land where I get my ideal languages, it'd look something like this:

    CopyBuffers( /* ... assume stuff is passed in */ )
    {
        bytesRead = 0;
        while(bytesRead < sizeOfFile)
        {
            error, bytesThisLoop = AsyncRead(fileHandle, &buffer, bufferSize);
            if( error )
            { /* do stuff */ }
            if( bytesThisLoop == 0 ) // something has gone wrong, error should have caught it
            { /* Something is wrong */ }

            bytesRead += bytesThisLoop;

            error, bytesWritten = AsyncWrite(destHandle, &buffer, bytesThisLoop);
            if( error )
            { /* do stuff */ }
            if( bytesWritten != bytesThisLoop )
            { /* Something may be wrong, depending on our write contract */ }
        }
    }

Someplace in AsyncWrite my entire state is captured and handed off to someone. The counter proposal suggests I handle the state myself, which is fine (but not as cool looking!)

Now on to said competing paper:

"Memory usage is exactly that required to represent the resumable function’s maximum stack depth,"

Which, for a cooperative multitasking system, is pretty well defined: you have a while(1) somewhere dispatching tasks! (Or an equivalent thread/fiber/etc. manager.)
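
For the curious, the while(1) in question tends to look something like this (the task table and names are made up):

    #include <stdbool.h>
    #include <stddef.h>

    typedef void (*TaskFn)(void* context);

    typedef struct {
        TaskFn        taskFn;
        void*         context;
        volatile bool ready;      /* set from interrupt handlers when work is pending */
    } Task;

    #define MAX_TASKS 8
    static Task sTasks[MAX_TASKS];

    void SchedulerRun(void)
    {
        while (1)                                 /* the while(1) in question */
        {
            for (size_t i = 0; i < MAX_TASKS; ++i)
            {
                if (sTasks[i].ready && sTasks[i].taskFn != NULL)
                {
                    sTasks[i].ready = false;
                    sTasks[i].taskFn(sTasks[i].context);  /* runs to completion, cooperatively */
                }
            }
            /* Could sleep/WFI here until the next interrupt marks something ready. */
        }
    }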

I agree that the "up and out" model leads to more boilerplate. I actually like requiring the use of "await" to indicate that calling this function is going to pause the thread, but I'd like it used purely as an FYI to the programmer, one the compiler enforces is put in place. (I'm not a fan of surprises.)

They have an idea of a resumable object, which I think they take too long explaining; all coroutine systems have this, since you have to at least copy the stack off to somewhere.

"Resumable objects are non-copyable and non-moveable"

Sort of sad, I can think of some very fun abuses, but since 99% of them would result in a crash... ;)

So overall, and this is the part I am not clear about, they want to restrict this to resumable expressions. From what I grok, that seems to mesh with my request for "await" being enforced (though here it seems needed).

The final syntax at the end is a bit ugly; I am not a fan of out parameters. Ignoring that, it may be less work overall for the programmer to get the base case up and running. I don't want to have to wrap all my async functions in adapter classes; it is nice if the compiler does it for me.

The more complex cases get ugly though. I'd like to see a really simple non-template example of just how to read from a file from the consumer's perspective.

IMHO that is 90% of the problem space for async. "IO is slow, get back to me when you're done."

Thanks for leading me down this hole! :)


Yeah, I'm not brave enough for embedded.

I'm still pretty excited about what's happening in this space though. Thanks for your thoughts.


In some ways embedded is easier, in others much harder. Doing it all on your own is hard, but if you have a good team backing you, I think it is actually easier than trying to write a large native desktop app.

No OS to get in your way. No decades of legacy APIs to clog things up. Because the entire world is understandable, no need to be concerned about someone else's thread ruining your day.

All your I/O channels are well measured and have known latency and throughput. If you are lucky enough to work with SRAM, then you even get a nice constant access speed to memory.

Interrupt routines are a fancy way to say "callback", and Direct Memory Access is a nice way to say "memcpy without CPU overhead".
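
A sketch of that view, with made-up names (the real registration and acknowledge steps are of course hardware-specific):

    #include <stddef.h>

    typedef void (*TransferDoneCallback)(void* context);

    static TransferDoneCallback sOnDone;
    static void*                sOnDoneContext;

    void StartTransfer(TransferDoneCallback onDone, void* context)
    {
        sOnDone        = onDone;
        sOnDoneContext = context;
        /* ... program the DMA/peripheral registers and kick off the transfer ... */
    }

    /* Installed in the vector table; the hardware literally calls us back here. */
    void TransferCompleteISR(void)
    {
        /* ... acknowledge/clear the interrupt flag in the peripheral ... */
        if (sOnDone != NULL)
            sOnDone(sOnDoneContext);
    }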

That said, the initial getting-up-and-running part is hard; that is why lots of pre-existing systems with their own little runtimes exist.


Re: ripping out libc, the folks at https://gandro.github.io/2015/09/27/rust-on-rumprun/ seem to have had success in replacing Rust's libc usage with their own restricted implementation, and have expressed interest in providing a fork of libstd that bypasses libc entirely and just does raw system calls (which itself probably wouldn't be useful for your purposes, but at least proves the concept).



