Multi-app works pretty well too; when I need to cross-reference between apps, throwing each one onto a split half is way better than swapping back and forth.
FlatBuffers lets you mmap data directly from disk; that trick alone makes it really good for use cases that can take advantage of it (fast access to read-only data). If you're clever enough to tune the ordering of fields, you can give it good cache locality and really make it fly.
We used to store animation data in mmapped FlatBuffers at a previous gig and it worked really well. The kernel would happily prefetch on access and page out under pressure, so we could have tens of MBs of animation data and only pay a couple hundred KB of resident memory, depending on access patterns.
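The zero-copy idea can be illustrated in plain Rust. This is only a sketch of the principle, not real FlatBuffers (which uses a schema compiler and vtables): fields are read in place from a borrowed byte slice at fixed offsets, with no deserialization pass, so an mmapped file only faults in the pages you actually touch. The `AnimFrame` layout here is hypothetical.

```rust
use std::convert::TryInto;

/// A "record" viewed directly over a borrowed byte slice -- no copy, no parse.
/// (Hypothetical fixed layout; real FlatBuffers resolves fields via vtables.)
struct AnimFrame<'a> {
    buf: &'a [u8],
}

impl<'a> AnimFrame<'a> {
    const SIZE: usize = 12; // 4-byte bone id + two 4-byte f32 channels

    fn new(buf: &'a [u8]) -> Option<Self> {
        (buf.len() >= Self::SIZE).then_some(Self { buf })
    }

    fn bone_id(&self) -> u32 {
        u32::from_le_bytes(self.buf[0..4].try_into().unwrap())
    }

    fn rotation(&self) -> f32 {
        f32::from_le_bytes(self.buf[4..8].try_into().unwrap())
    }

    fn translation(&self) -> f32 {
        f32::from_le_bytes(self.buf[8..12].try_into().unwrap())
    }
}

fn main() {
    // In the real setup this slice would come straight from an mmap'd file.
    let mut data = Vec::new();
    data.extend_from_slice(&7u32.to_le_bytes());
    data.extend_from_slice(&0.5f32.to_le_bytes());
    data.extend_from_slice(&1.25f32.to_le_bytes());

    let frame = AnimFrame::new(&data).expect("buffer too small");
    println!("{} {} {}", frame.bone_id(), frame.rotation(), frame.translation());
}
```

Field ordering matters here exactly as described above: fields accessed together should sit in the same cache line.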
One capability mechanism that's in wide use but not well known (and not touched on in the article) is Android's RPC mechanism, Binder (a lot of its history predates Android, from what I recall).
Binder handles work just like object capabilities: you can only use what's sent to you, and a process can delegate by sending out other Binder handles.
Android hides most of this behind its permission model, but the capability model still exists and can be used by anyone in the system.
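The object-capability shape described above can be sketched in a few lines of Rust. This is a toy, not Binder itself, and all the names (`Logger`, `LogHandle`) are made up: the point is that there's no global registry, so code that was never handed a handle simply has no way to act on the object, and delegation is just passing a clone of your handle onward.

```rust
use std::sync::{Arc, Mutex};

struct Logger {
    lines: Mutex<Vec<String>>,
}

// A capability: the only route to the Logger. No ambient authority exists,
// so code that was never handed a LogHandle cannot log at all.
#[derive(Clone)]
struct LogHandle(Arc<Logger>);

impl LogHandle {
    fn log(&self, msg: &str) {
        self.0.lines.lock().unwrap().push(msg.to_string());
    }
}

fn subsystem(handle: LogHandle) {
    // Delegation: explicitly pass a copy of our capability further down.
    helper(handle.clone());
    handle.log("subsystem done");
}

fn helper(handle: LogHandle) {
    handle.log("helper ran");
}

fn main() {
    let logger = Arc::new(Logger { lines: Mutex::new(Vec::new()) });
    let root = LogHandle(logger.clone());
    subsystem(root);
    println!("{:?}", logger.lines.lock().unwrap());
}
```

In Binder the "handle" is a kernel-managed file-descriptor-like object sent over IPC, which is what makes the property hold across process boundaries rather than just within one address space.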
Yes, and macOS/iOS have XPC, which is similar to Binder. Binder is a BeOS-era thing; parts of Android were written by former Be engineers, so the API terminology is the same (binders, loopers, etc.).
Binder is also somewhat like Mojo in that you can do fast in-process calls with it, iirc. The problem is that, as you note, this isn't very useful in the Android context because within a process there's no way to keep a handle private. Mojo's ability to move code in and out of processes actually is used by Chrome extensively, usually either for testing (simpler to run everything in-process when debugging) or because not every OS it runs on requires the same configuration of process networks.
That only applies when dynamic dispatch is involved and the linker can't trace the calls. For direct calls and generics (which idiomatic Rust code tends to prefer over `dyn` traits), LTO will prune extensively.
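The distinction looks like this in code. Both functions do the same thing, but the generic one is monomorphized per concrete type, so the compiler and LTO can see exactly which impls are reachable; the `dyn` one goes through a vtable, so the linker must keep any impl whose vtable might escape. (Function names are illustrative.)

```rust
use std::fmt::Display;

// Generic: monomorphized per concrete type; unused instantiations
// never exist, and LTO can prune aggressively.
fn describe_static<T: Display>(x: T) -> String {
    format!("value: {x}")
}

// dyn: dispatch through a vtable; the call target is opaque to the
// linker, which limits dead-code elimination.
fn describe_dyn(x: &dyn Display) -> String {
    format!("value: {x}")
}

fn main() {
    println!("{}", describe_static(42));
    println!("{}", describe_dyn(&"hi"));
}
```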
Depends on what is desired; in this case it would fail (through the `?`) and report that it's not a valid HTTP URI. This would be for a generic parsing library that allows multiple schemes to be parsed, each with its own parsing rules.
If you want to mix schemes you need to be able to handle all of them; you can either go through all the variations you want to test (via the same generics) or just accept that you need a full URI parser and lose the genericity.
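A minimal sketch of the design being described, with made-up names (`Scheme`, `Http`, `Ftp`, `parse_uri`): the scheme is a type parameter, so a parser instantiated for HTTP rejects everything else, and only the instantiations you actually use end up in the binary.

```rust
// Each scheme is a type carrying its own parsing rules (here, just a prefix).
trait Scheme {
    const PREFIX: &'static str;
}

struct Http;
impl Scheme for Http {
    const PREFIX: &'static str = "http://";
}

struct Ftp;
impl Scheme for Ftp {
    const PREFIX: &'static str = "ftp://";
}

#[derive(Debug, PartialEq)]
struct ParseError(String);

/// Parse a URI for one specific scheme; returns the part after the prefix.
fn parse_uri<S: Scheme>(input: &str) -> Result<&str, ParseError> {
    input
        .strip_prefix(S::PREFIX)
        .ok_or_else(|| ParseError(format!("not a valid {} URI", S::PREFIX)))
}

fn main() -> Result<(), ParseError> {
    let host = parse_uri::<Http>("http://example.com")?;
    println!("{host}");

    // An FTP URL fails the HTTP-instantiated parser, propagated via `?`
    // in real code; shown here as an explicit check:
    assert!(parse_uri::<Http>("ftp://example.com").is_err());
    Ok(())
}
```

Mixing schemes then means either trying each instantiation you care about in turn, or writing the non-generic all-schemes parser, exactly the trade-off described above.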
See, the trait system in Rust actually forces you to discover your requirements at a very core level. That's not a bug, it's a feature. If you need HTTPS, then of course you need to include the code to do HTTPS, and LTO shouldn't remove it.
If your library cannot parse FTP, you either enable that feature, implement it, or use a different library.
That assumes people know what they're doing in C/C++. I've seen just as many bloated codebases in C++, if not more, because the defaults for most compilers are not great and it's very easy for things to get out of hand with templates, excessive use of dynamic libraries (which inhibit LTO), or using shared_ptr for everything.
My experience is that Rust guides you towards defaults that avoid those pitfalls, and for the cases where you really do need fine-grained control, unsafe blocks with direct pointer access are available (and I've used them when needed).
Is there a name for a fallacy like "appeal to stupidity", where the argument against using a tool that's fit for the job boils down to "all developers are too dumb to use this / you need to read a manual / it's hard"?
I think there is something to be said for having good defaults and tools that don't force you to be on top of every last detail 100% of the time, lest things get out of control.
It also depends on the team: some teams have a high density of seasoned experts who've made the mistakes and know what to avoid, but I think the history of memory vulnerabilities shows it's very hard to keep that bar consistent across large codebases or dispersed teams.
This is ultimately the crux of the issue. If Google, Microsoft, Apple, whoever, cannot manage to hire engineers that write safe C/C++ all the time (as has been demonstrated repeatedly), it's time to question whether the model itself makes sense for most use cases.
Grandparent can't argue that these top-tier engineers aren't reading the manual. Of course they are. Even after reading it, they still cannot manage to write perfectly safe code, because it is extremely hard to do.
Personally, my argument would be that problems at the low level are just hard problems, and doing them in Rust trades one set of problems (memory safety) for another, probably unexpected behaviour around memory layouts and lifetimes at the very low level.
It's not that all developers are dumb/stupid. It's that even the smartest developers make mistakes, so having a safety net that can catch damaging mistakes is helpful.
I've read several posts here where people say things like "this is badly designed because it assumes people read the documentation".
???????
Yes you need to read the docs. That is programming 101. If you have vim set up properly then you can open the man page for the identifier under your cursor in a single keypress. There is ZERO excuse not to read the manual. There is no excuse not to check error messages. etc.
Yet we consistently see people that want everything babyproofed.
On the other hand, there's no excuse for designers and developers (or their product manager, if that's who's in authority) not to work their asses off on the ergonomics/affordances of the tools they release to any public (be it end users or developers, who are the end users of the tool makers, etc.).
It benefits literally everyone: the users, the product's reputation and value, the builders' reputation, the support team, etc.
Do people read the docs? Often, no, they don't. So, are you creating tools for the developers we have, or for the developers you think we should have? If the latter, you are likely to find that your tool makes less impact than you think it should.
Computer languages are not tools for illiterates. You need to learn what you're doing. And yet, programmers do so less than we think they should. If we don't license programmers (to weed out the under-trained), then we're going to have to deal with languages being used by people who didn't read the docs. We should give at least some thought to having them degrade gracefully in that situation.
RAII is fine when it is the right tool for the job. Is it the right tool for every job? Certainly there are other more or less widely practiced approaches. In some situations you can come up with something that is provably correct and performs better (in space and/or time). Then there are just trade-offs.
Watchdog-timer (WDT) patterns are highly underrated; even in pure software there's value in degrading/recovering gracefully versus systems that have to be "perfect" 100% of the time and then force user intervention when they go wrong.
One of my favorite blogs on the topic is https://ferd.ca/the-zen-of-erlang.html, which does a great job of covering how Erlang approached it; lots of lessons that can be applied more broadly.
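The core "let it crash, then restart" shape from that post can be sketched in plain Rust (a real Erlang supervisor does far more: restart strategies, backoff, supervision trees; the names `flaky_worker`/`supervise` here are made up):

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::thread;

static ATTEMPTS: AtomicU32 = AtomicU32::new(0);

// A worker that fails its first two runs, then succeeds -- a stand-in
// for transient faults that a restart genuinely clears.
fn flaky_worker() -> &'static str {
    if ATTEMPTS.fetch_add(1, Ordering::SeqCst) < 2 {
        panic!("transient failure");
    }
    "work done"
}

/// Supervisor: run the worker in its own thread and restart it on panic,
/// up to `max_restarts` extra times, instead of taking the system down.
fn supervise(max_restarts: u32) -> Option<&'static str> {
    for _ in 0..=max_restarts {
        match thread::spawn(flaky_worker).join() {
            Ok(result) => return Some(result),
            Err(_) => eprintln!("worker crashed, restarting"),
        }
    }
    None
}

fn main() {
    println!("{:?}", supervise(5));
}
```

The isolation boundary here is a thread (a panic can't corrupt the supervisor); in Erlang it's a process, which also isolates heap state, making the pattern much stronger.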
Both are true, as praise is absolutely a currency that can be devalued like any other.
Given with abandon, even honestly, it loses its value to both the giver and the recipient. That's something many in management roles never appreciate, and it's one of the reasons some give it too sparingly: they've found that abundant praise loses its utility and come to the incorrect conclusion that scant praise is best.
You could already talk freely to the MQTT broker on the printer, and it was already secured with a unique password. This feels like demoting it to a second-class feature that could disappear at some future point.
They've suffered real brand damage. Neither set of changes (the original or these revisions) seems likely to win over unconvinced potential customers, yet they've actively turned some away.
I don't really see what a "developer mode" offers here beyond the existing solution. The current MQTT endpoint is already locked down with a unique password and, AFAIK, was read-only anyway.
Don't get me wrong, I'm glad they're responding to feedback, but the feedback shouldn't have been required in the first place.
I'm all for better security on products (especially ones that heat up to 300°C!), but interoperability with open standards makes a better product overall, and given the direction we've seen in the IoT space, I think they've done quite a bit of damage (even if unintentionally) by not taking more care in this area.
Developer mode is just "how it works today" mode. It's insecure and uses private APIs, and thus shouldn't be used, but people will anyway, so they're listening to their customers.
> Developer mode is just "how it works today" mode.
For LAN mode, yes, but for how the printer works overall, not exactly. Right now you can print from a third-party slicer (OrcaSlicer) and at the same time use the Bambu Handy mobile app to, for example, monitor the print. LAN mode disables the cloud connection (it always has), which means that with the revised changes you have to choose either a third-party slicer _or_ an active cloud connection.
Adapting to Bambu Connect means passing the print files to another app, and it's unclear whether that app will work without a cloud account. Plus, Orca loses functionality like controlling the printer directly from the slicer, and one of the critiques of the new system is needing a separate app in the first place.
If you mean that OrcaSlicer will reimplement the Bambu Connect protocol internally, I doubt that will happen: Bambu Connect is not open source, so this would involve reverse-engineering the protocol and potentially extracting Bambu Connect's certificates.
Yeah, that's a huge bummer if so. I've got a Home Assistant automation that shows printer status without needing an app installed, and a fully automated secondary filtration system that would be a pain to manage manually.
I totally understand if it's something that could change/break in future updates, but the language about it being "exploited" is a bummer; you'd think extending/documenting it would actually drive further adoption of the printers by building a more robust ecosystem around them.