Hacker News | new | past | comments | ask | show | jobs | submit | arianvanp's comments

If you care about this stuff you need to bring auditing in-house and do your own audits with people who care. Then get certified by an external auditor for the paperwork.

If you're at a size where you can't afford that, you can start very lightweight by doing spec-driven development with the help of AI. It's better than nothing.

But the important part is you, as a company, should inherently care.

If you rely on an auditor feedback loop to get compliant, you've already lost.


This function exists in every publicly traded company, and is called internal audit.

It has the potential to be incredibly impactful, but often devolves into box ticking (like many compliance functions).

And it's really hard to find technical people to do the work, as it's generally perceived as a cost centre so tends not to get budget.


Nobody really tries to get technical people to do the work.

Like, cool, it's a great idea and would potentially produce positive results if done well, but the roles pay half of what engineering roles do, and the interviews are stacked towards compliance frameworks.

There's very little ability to fix a large public company when HR is involved.


Maybe it should be treated like on-call duty, with the load spread between existing engineers on some kind of schedule, maybe with some extra comp as incentive, because it's boring and will take more effort/time in the "easy case" compared to pager duty.

Speaking as a technical (data) person currently working in internal audit for a not quite public company, it's not entirely uncommon.

I do agree that the pay isn't great, but it's the fact that it's considered a cost centre that's been the issue for me.


Everything except for sales tends to be seen as a cost centre. It's ridiculous.

To be honest, I would go even further: if you think certification equals security, you are even more lost.

So many controls are dubious, sometimes even actively harmful for some set-ups/situations.

And even more so, it's also perfectly feasible to pass the gates with a burning pile of trash.


And they do not track the industry at all; at best they'll help you win the war of five years ago.

Imagine my face when I had to take periodic backups of stateless, non-root containers with immutable read-only filesystems, for "compliance".

Maybe that's just a good moment to review your _policy_. About half of our compute is exactly that, and we just don't have to do that sort of backup; it'd be silly.

We don't deal with the military though, only fintech (prime brokers, major banks, funds) and some government. Plenty of certifications (we have someone on site all year round), no silliness.


That's hilarious :)

Good morning to you too...


But companies don't care. They don't want compliance for the feel-goods; they want compliance because their partners require it. They do the minimum amount required to check the box.

Caring about security and caring about some of the arbitrary hoops you have to jump through for some of these compliance regimes don't always overlap as much as you'd expect.

I’ve been at companies where we cared deeply about security, but certain compliance things felt like gimmicks on the side. We absolutely wanted to do the minimum required to check that box so we could get back to the real work.


Sounds like a reachability problem in Petri nets to me?

E-cigarettes work by shorting the battery, releasing a lot of instantaneous heat. Their safety controller firmware is often of... dubious quality. It happens quite often that the cigarette doesn't stop shorting the battery and catches fire as a result.

Making fire is literally their function, unlike a laptop's.

Combine that with a basically unregulated and semi-illegal supply chain, and it becomes a recipe for disaster.


AFAIK that's not really true, at least of modern vapes. Their function is not to "make fire", it's to heat a metal coil to a specific temperature at which propylene glycol and vegetable glycerin will aerosolize which is far lower than ignition temperature. Most modern vapes also use controllers with a feedback loop that pulse power through the coil hundreds of times per second to maintain the ideal temperature and desired power throughput. That being said there are definitely crappy and diy devices that unsafely dump huge current through devices but AFAIK modern devices generally don't do this because it's a bad user experience (burnt taste, too hot, ruined cotton absorber, etc) -- regulating the power is what users want so it's what devices do.
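To make that feedback loop concrete, here's a toy bang-bang sketch of pulsing power at a fixed rate to hold a coil near a setpoint. This is not any vendor's actual firmware (real devices typically infer coil temperature from its resistance and use fancier control); every constant here is invented for illustration.

```python
# Toy simulation of a pulsed-power temperature controller.
# All constants (setpoint, pulse rate, heating/loss coefficients) are
# made up -- the point is only that the loop regulates, never "makes fire".

TARGET_C = 220.0      # aerosolization setpoint, far below ignition
AMBIENT_C = 25.0
PULSES_PER_SEC = 200  # the controller decides power this many times a second

def simulate(seconds=10.0):
    temp = AMBIENT_C
    dt = 1.0 / PULSES_PER_SEC
    for _ in range(int(seconds * PULSES_PER_SEC)):
        if temp < TARGET_C:               # bang-bang: fire only below setpoint
            temp += 40.0 * dt             # heat added while power is on
        temp -= 0.05 * (temp - AMBIENT_C) * dt  # losses to air and liquid
    return temp
```

After a few simulated seconds the coil settles into a narrow band around the setpoint instead of running away, which is the behaviour the comment above describes.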

A snapshot of your build folder, not even the sources. This is my other problem with mainstream distros: extending them is completely opaque. NixOS is source-based, and anything and everything can be updated by the user. Need some patch from the kernel ML? One line of code. Need a bugfix in your IDE that hasn't landed in a release? One line of code.
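As a sketch of what that "one line" looks like in practice, here's a hypothetical overlay applying a local patch file via the standard `overrideAttrs` mechanism (the package name and patch file are placeholders):

```nix
# Hypothetical overlay: add ./my-fix.patch on top of an existing package.
final: prev: {
  hello = prev.hello.overrideAttrs (old: {
    patches = (old.patches or [ ]) ++ [ ./my-fix.patch ];
  });
}
```

The rest of the system rebuilds against the patched package automatically.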

There is no distinction between package maintainers and end users. They have the same power.

In the meantime, I don't expect Debian users to ever write a package themselves or to modify one.

In NixOS you do it all the time.


FWIW... I have modified packages on Fedora and installed them. The workflow is very simple... of course, not as simple as NixOS but here goes:

# clone the package definition
$ fedpkg clone -a <name of package>
$ cd <name of package>

# install build dependencies
$ sudo dnf builddep ./nameofpackage.spec

# now add your patches or modifications

# build the package locally
$ fedpkg local

# install the locally modified package
$ sudo dnf install ./the-locally-built-package.rpm


Arch Linux also has a long history of people writing their own package specs (AUR) and is relatively simple too of course.

Let me put it differently. The documentation of NixOS treats package maintainers and users as kind of equal.

This has benefits and downsides. The benefit is that everyone is treated as a power user. The downside is that power users are horrible at writing docs, and this philosophy is my main theory for why NixOS docs are so... bad.

Fedora (and RHEL) end-user and developer docs are written for quite different audiences.


Yes I just replied to your other comment with the same observation. It reminds me of an article by Paul Graham, I forget which, who expressed the difficulty of explaining to programmers who lack an abstraction just how good the abstraction is. Anything you can do with NixOS, you can do with any distribution, because it isn't magic. But somehow, more stuff becomes possible because it gives you a better way to think.

(As for why the docs are so bad, I think it's because of the lack of good canonical documentation. There are too many copies of it. Search engines ignore the canonical version because it's presented as one giant document. Parts of the system aren't documented at all and you have to work out what you've got by reading the code. The result is that you have no idea what to do if you want to improve the situation; it seems like your best option is to create new documentation, and now you have the same basic level of documentation that didn't help the first hundred times it was rewritten. And I don't really think submitting a PR to nixpkgs is exactly user-friendly, so it probably discourages people from doing the "I'm just trying to understand this, so I'll fix up the documentation as I learn something" thing.)


Bye bye getting automatic upgrades to that package.

Yes, I think you've hit the nail on the head. I tend to view NixOS not as a distribution, but as a distribution framework. The system configuration is the source for an immutable distribution as much as it is system configuration.

You're in no way bound by decisions of the nixpkgs contributors: as you say, we can add a patch. Or we can also decide we totally disapprove of the way they've configured such-and-such a service and write our own systemd service to run it.

Anyone can write a local Debian package which adds a patch, and build and install it. And anyone can write a systemd service and use it instead of the distribution's systemd service. But on NixOS, these are equal to the rest of the system rather than outside it. Nixpkgs is just a library which your configuration uses to build a system.
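For instance, a self-contained unit declared straight in configuration.nix, sitting alongside (or in place of) whatever module nixpkgs ships. The daemon name and command are placeholders:

```nix
{ pkgs, ... }: {
  # A hypothetical service of our own, ignoring the nixpkgs module for it.
  systemd.services.my-daemon = {
    description = "Our own unit for such-and-such a service";
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.hello}/bin/hello";
      DynamicUser = true;
    };
  };
}
```

Because it lives in the same configuration as everything else, it's rebuilt and rolled back with the rest of the system.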


I've been trying to get the nixd LSP to work with Claude Code, but I got stuck as they gatekeep it behind their "plugin" system and you can't just configure it in settings.json to point to a Nix store path like you can with MCPs :(

My usual solution is to just clone whatever I need, e.g. in this case just clone nixpkgs, put in your instructions that it should do a git pull to make sure it's up to date, and then refer to that whenever doing anything with Nix. Agents are really good at using grep to explore repos, even for something completely internal. Then you don't need any config or special tools. E.g. for work I just have a directory with like 30 repos cloned, and my base AGENTS file says to refer to either them or live system state for ground truth. I basically never encounter hallucinations.

Same goes for the harness itself. Want to know how Codex works or whether it can be configured in some way? Clone it and ask it to look at itself.


There’s a NixOS MCP; it’s pretty good.

I think you're confusing gender and sexual orientation. He's calling her a lesbian.


No, I'm not. He also posted about her deep voice and said people should check what genitals she really has.


Also, doesn't this mean I have to reconfigure all my tools to use HTTP, and then when I forget to enable this it will fall back to getting MITM'd by the Internet? It fails open in the most insecure way possible.


But it hasn't been built exclusively for that use case. It's literally the same.


That, and the built-in sandbox in Claude Code is bad (read-only access to everything by default) and tightly coupled (you can't modify it or swap it out).


That is also a Linux VM on macOS. They're not macOS containers, so it's completely pointless/useless for macOS or iOS development.


Oh, yes. I thought GP was mostly worried about the shared-VM problem.

