One of the larger problems is purely social. Some people unnecessarily resist the idea that code can run on something other than x86 (and perhaps now Arm).

Interestingly, some apologists say that maintaining portability in code is a hindrance that costs time and money, as though a company's profit or a programmer's productivity will be dramatically affected by having to program without making assumptions about the underlying architecture. In reality, writing without those assumptions usually makes code better, with fewer edge cases.
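To make that concrete, here's a minimal sketch in Rust (hypothetical, not taken from any project in this thread) of reading a little-endian u32 out of a byte buffer. The version that assumes the host's layout bakes in both alignment and endianness; the portable version is explicit about byte order and turns the short-buffer edge case into an ordinary Option:

    // Non-portable: assumes the buffer is 4-byte aligned and the host is
    // little-endian; the raw-pointer cast is undefined behavior otherwise.
    fn read_u32_assuming_host_layout(buf: &[u8]) -> u32 {
        unsafe { *(buf.as_ptr() as *const u32) }
    }

    // Portable: explicit about byte order, valid on any architecture, and
    // a too-short buffer surfaces as None instead of undefined behavior.
    fn read_u32_le(buf: &[u8]) -> Option<u32> {
        let b = buf.get(..4)?;
        Some(u32::from_le_bytes([b[0], b[1], b[2], b[3]]))
    }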

It'd be nice if people wouldn't go out of their way to fight against portability, but some people do :(



I argue the opposite: It's important to know the purpose of the program, including the computer that the program will run on.

I can understand polite disagreements about target architectures: You might write a Windows desktop program and disagree with someone about whether you should support 32-bit or go 64-bit only.

BUT: Outside of x64 and ARM, your program (Windows Desktop) is highly unlikely to run on any other instruction set.

The reality is that it's very time-consuming and labor-intensive to ship correctly-working software. Choosing and limiting your target architectures is an important part of that decision. Either you increase development cost/time (and risk not getting a return on your investment), or you ship something buggy and risk your reputation / support costs.


> windows

Everything you write makes sense in the context of Windows, but only for Windows. That's certainly not true for open source non-Windows specific software.


> That's certainly not true for open source non-Windows specific software.

Well, there's a lot of nuance in such a statement.

If you're developing an open-source library, then it's up to the vendor who uses your software to make sure that they test according to the intended purpose of the program.

If you're developing an open-source program for end users, then it's also "important to know the purpose of the program, including the computer that the program will run on." This means being very clear, when you offer the program to end users, about your level of testing / support.

For example, in one of my programs: https://github.com/GWBasic/soft_matrix?tab=readme-ov-file#su...

> I currently develop on Mac. Soft Matrix successfully runs on both Intel and Apple silicon. I have not tested on Windows or Linux yet; but I am optimistic that Soft Matrix will build and run on those platforms.

Now, for Soft Matrix I developed a Rust library for reading / writing .wav files (https://github.com/GWBasic/wave_stream). I've never made any effort to test it on anything other than Mac (Apple Silicon / Intel). If someone wants to ship software using 32-bit Linux on PowerPC, it's their responsibility to test their product; wave_stream is just a hobby project of mine.


I actually think that's the point, though: it isn't about testing, it is about not going out of your way to make assumptions. Clearly, your code isn't riddled with #ifdef macOS, or you wouldn't be saying it probably works on Windows. And, as you coded it in Rust using Cargo, as annoying as I may sometimes find its cross-compilation behavior, it by and large works: if I take your code, I bet I can just ask it to compile for something random and see what happens... it might even work, and you don't actively know of an obvious reason why it can't.

In contrast, a lot of libraries I use--hell: despite myself caring about this, I have become part of the problem (though I'd argue in my defense it is largely because my dependencies make it hard to even build otherwise)--come with a bespoke build system that is only able to compile for a handful of targets, and code that is riddled with "ifdef intel elif arm else error" / "ifdef windows elif linux elif apple else error". I can pretty much tell you: my code isn't going to work on anything I didn't develop it for, and would require massive intervention to support something even as trivial-sounding as FreeBSD, because none of the parties involved in its code really cared about making it work.

So like, while you might be arguing against the premise, in practice you could be the poster child for the movement ;P. You developed your software using an ecosystem that values portability, you didn't go out of your way to undermine that, and you aren't actively poo-pooing people who might want to even try it on other systems (as, say, every major project from Google does, where a platform is either fully tested and supported or something no one should discuss or even try to fix); you're merely warning them it wasn't tested, and I frankly bet that if I showed up with a patch that fixed some unknown issue compiling it for RISC-V, you'd even merge the change ;P.
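For what it's worth, asking Cargo to try an untested target really is about that simple. A sketch (assuming the stock riscv64gc-unknown-linux-gnu target; producing an actual binary may still need a cross-linker configured, which is why cargo check is the lazy first pass):

    rustup target add riscv64gc-unknown-linux-gnu
    cargo check --target riscv64gc-unknown-linux-gnu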


I spent over a decade developing cross-platform software.

Even now (professionally) I'm getting back into it. My current job is in C#, and we're a little late to the .NET 8 game. It's much cheaper to host .NET 8 on Linux than on Windows, so now we're making the switch.

Turns out timezone names are different between Windows and Linux (Windows uses IDs like "Eastern Standard Time", while Linux uses IANA names like "America/New_York"), and our code uses timezone names.


Also, develop on your target architecture. I see too many people trying to use their ARM Macs to develop locally. Just ssh into an x86 machine.


I am for portability, but if you decide to go that route, you need to do a ton of testing on different platforms to make sure it really works. That definitely costs time and effort, and there are a lot of ways to do it wrong.


Even setting up your automated tests on multiple platforms can cost a ton of money on CI infra alone. Besides the increased server costs, some architectures cost a lot more to rent on cloud CI systems (cough, macOS), setting up local infra to run tests on-prem can be a huge endeavor, and each new platform added has fixed costs.

Never mind your cloud CI randomly dropping support for one of your architectures.


> In reality, writing without those assumptions usually makes code better, with fewer edge cases.

Writing code for portability is usually the easy part. Testing your software on multiple architectures is a lot more work, though.


And more to the point, just because you write for portability doesn't guarantee that it will work. If you're not able to test on multiple architectures, then you're not really able to assert that your code is actually portable.

There's just no business case for portability. Writing for portability makes your code more difficult to read and maintain (because you have to handle all those edge cases), and the business needs to pay for hardware for the additional architectures to test on. For what? Some nebulous argument that it'll help catch exploits? Most InfoSec departments have far better bang-for-the-buck projects in their backlog to work on.


Who said anything about asserting that your code is actually portable?

There are two extremes here, and not being at one doesn't mean you're at the other:

1) Writing code with bad coding practices where alignment, endianness, word size, platform assumptions, et cetera, are all baked in unapologetically, saying that doing anything else would incur financial cost

2) Writing code that's as portable as possible, setting up CI and testing on multiple platforms and architectures, and asserting that said code is actually portable

One can simply write code that's as portable as possible and let others test and/or say when there's an issue. That's quite common.

As someone who compiles software on architectures that a good number of people don't even know exist, I think it's absolutely ridiculous to suggest that portability awareness without multi-platform and multi-architecture CI and testing isn't worthwhile.



