
At the same time, you have endless stories of people losing family and friends to cancer because a doctor dismissed their complaints as anxiety or a need to exercise more, so the cancer wasn't discovered until it was too late to treat.

The answer can't be to put our collective heads in the sand.


IPv4 continues to be available to entities whose need fits a particular policy shape; most people's doesn't. Specifically, you can get IPv4 /24s for IPv6 transition purposes. This includes anycast DNS, MX, etc. for legacy clients on other networks, the v4 side of CGNAT, and so on.

E.g. I was able to get a /24 in the ARIN region in 2021 and could justify 2 more for a _logical_ network topology similar to what NK presents to the world.

APNIC similarly has a pool available for IPv4 allocations: https://www.apnic.net/manage-ip/ipv4-exhaustion/#the-situati...


IPv4 is a question of money in almost all cases at this point. You can get what you can pay for.

Trivially on their (and QNAP's) amd64 systems, at least. There are some quirks where they are more like an embedded system than a PC, but it's not a big deal: things like console over UART (unless you add a UART) and fan control not working out of the box, so you either set the fan to full speed in the BIOS or mess with the config.

Debian has docs on installing on at least one model of their arm boxes: https://wiki.debian.org/InstallingDebianOn/Synology

I run Debian on a few different models of QNAP because their hardware occupies a niche of compact enclosures, low noise, and many drives.


There’s secret from an adversary and then there’s internal compartmentalization.

You could have hundreds of people with a business need to look at syslog from a router, but approximately nobody who should have access to the login creds of administrative users, and maybe tens of people with access to automation role account creds.


TFA asserts that Git LFS is bad for several reasons, including that it is proprietary with vendor lock-in, which I don't think is a fair claim. GitHub provided an open client and server, which negates that.

LFS does break disconnected/offline/sneakernet operations which wasn't mentioned and is not awesome, but those are niche workflows. It sounds like that would also be broken with promisors.

The `git partial clone` examples are cool!
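
For anyone who hasn't tried them, here is a minimal sketch of the partial clone flavors (the URL is a placeholder):

    # blobless clone: fetch commits and trees up front, blobs on demand
    git clone --filter=blob:none https://example.com/repo.git

    # treeless clone: fetch trees on demand too, even lighter up front
    git clone --filter=tree:0 https://example.com/repo.git

    # only defer blobs larger than 1 MiB
    git clone --filter=blob:limit=1m https://example.com/repo.git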

The description of Large Object Promisors makes it sound like they take the client-side complexity in LFS, move it server-side, and then increase the complexity? Instead of the client uploading to a git server and to an LFS server, it uploads to a git server which in turn uploads to an object store, but the client will download directly from the object store? Obviously different tradeoffs there. I'm curious how often people will get bit by uploading to public git servers which upload to hidden promisor remotes.
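
For reference, some promisor plumbing already exists client-side today; after a blobless clone you can see it with something like this (a sketch of the current mechanism, not of the proposed large-object-promisor config):

    git config remote.origin.promisor             # -> true
    git config remote.origin.partialclonefilter   # -> blob:none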


LFS is bad. The server implementations suck. It conflates object contents with the storage method. It's opt-in, in a terrible way - if you do the obvious thing you get tiny text files instead of the files you actually want.
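
For anyone who hasn't hit it: the "tiny text files" are LFS pointer files, which look roughly like this (the hash and size here are placeholders):

    version https://git-lfs.github.com/spec/v1
    oid sha256:0000000000000000000000000000000000000000000000000000000000000000
    size 12345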

I dunno if their solution is any better but it's fairly unarguable that LFS is bad.


It does seem like this proposal has exactly the same issue. Unless this new method blocks cloning when unable to access the promisors, you'll end up with similar problems of broken large files.


How so? This proposal doesn’t require you to run `git lfs install` to get the correct files…


If the architecture is irrelevant and it's just a matter of turning it on by default they could have done that with LFS long ago.


Git lfs can't do it by default because:

1. It is a separate tool that has to be installed separately from git

2. It works by using git filters and git hooks, which need to be set up locally (roughly as sketched below).

Something built in to git doesn't have those problems.
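
To make point 2 concrete, this is roughly what `git lfs install` and `git lfs track` end up writing on the client (the `*.psd` pattern is just an example):

    # ~/.gitconfig (written by `git lfs install`)
    [filter "lfs"]
        clean = git-lfs clean -- %f
        smudge = git-lfs smudge -- %f
        process = git-lfs filter-process
        required = true

    # .gitattributes in the repo (written by `git lfs track "*.psd"`)
    *.psd filter=lfs diff=lfs merge=lfs -text

Without that filter config and the hooks, a plain `git checkout` just leaves you with the pointer files.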


But then they could have just taken the LFS plugin and made it a core part of git, if those were the only problems.


If it didn't have those problems, it wouldn't really be git lfs, it would be something else.


And what happens when an object is missing from the cloud storage or that storage has been migrated multiple times and someone turns down the old storage that’s needed for archival versions?


You obviously get errors in that case, which is not great.

But GP's point was that there is an entire other category of errors with git-lfs that is eliminated with this more native approach. Git-lfs allows you to get into an inconsistent state, e.g. when you interrupt a git action, that just doesn't happen with native git.
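
For what it's worth, when a git-lfs checkout does end up in that state (pointer files instead of content), recovery is usually along these lines (a general sketch, not the exact scenario above):

    git lfs fetch --all   # download any missing LFS objects from the remote
    git lfs checkout      # replace pointer files in the worktree with real content
    git lfs fsck          # verify pointers and objects are consistent

The point being that it's an extra repair loop that simply doesn't exist when the object store is native to git.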


It remains to be seen what it actually eliminates and what they're willing to enable by default.

The architecture does seem to still be in the general framing of "treat large files as special and host them differently." That is the crux of the problem in the first place.

I think it would shock no one to find that the official system also needs to be enabled and also falls back to a mode where it supports fetching and merging pointers without full file content.

I do hope all the UX problems will be fixed. I just don't see them going away naturally, and we have to put our trust in the hope that the git maintainers will make the commands enjoyable, seamless, and safe.


I think the answer is maybe to not store large files in the repo but to manage those separately.

Mostly I have not run into such a use case, but in general I don't see any upside in trying to shove big files in together with the code inside repositories.


That is a complete no-go for many use cases. Large files can have exactly the same use cases as your code: you need to branch them, you need to know when and why they changed, you need to check how an old build with an old version of the large file worked, etc. Just because code tends to be small doesn't mean that all source files for a real program are going to be small too.


Yeah but GIT is not the tool for that.

That is why I don't understand why people "need to use GIT".

You still can make something else, like keeping versions and keeping track of those versions in many different ways.

You can store a reference in the repo, like a link or whatever.


A version control system is a tool for managing a project, not exclusively a tool for managing source code.

Wanting to split up the project into multiple storage spaces is inherently hostile to managing the project. People want it together because keeping it together is a basic function of managing a project of digital files. The need to track and maintain digital version numbers and link them to release numbers and build plans is just a requirement.

That's what actual, real projects demand. Any project that involves digital assets is going to involve binary, often large, data files. Any project that involves large tables of pre-determined or historic data will involve large files, text or binary, that contain data the project requires. Such projects won't have everything encompassed as a text file; it's weird when that's true for a project. The Linux kernel is a special case because it, somewhat uniquely, doesn't have graphics or large, predetermined data blocks. Not all projects that need to be managed by git share 100% of the attributes of the Linux kernel.

This idea that everything in a git project must be a small text file is incredibly bizarre. Are you making a video game? A website? A web application? A data-driven API? Does it have geographic data? Does it require images? Video? Music or sound? Are you providing static documentation that must be included?

So the choices are:

1. Git is a useful general-purpose VCS for real-world projects.

2. Git does not permit binary or large files.

Tracking versioning on large files is not some massively complex problem. Not needing to care about diffing and merging simplifies how those files are managed.


That’s what I disagree with. For me Git is for managing source code. Everything else is trying to fit a square peg through a round hole.

There are other tools for managing projects and better ways to version large files or binary assets.

Git is great at handling text changes and that’s it. It sucks with binary blobs.


Git is an SCM, not a VCS. By design.


> Yeah but GIT is not the tool for that.

Yes, because Git currently is not good at tracking large files. That's not some fundamental property of Git; it can be improved.

Btw it isn't GIT.


I beg to disagree that this is an improvement, or that not being good at tracking large files is a flaw of Git.


OK, but does it affect you if it also addresses other people's use cases?


The important point is that you don't want two separate histories. Maybe if your use case is very heavy on large files, you can choose a different SCM, which is better at this use case (SVN, Perforce). But using different SCMs for different files is a recipe for disaster.


Git is the right tool. It's just bad at this job.


That's pretty much what git LFS is...


Another way that LFS is bad, as I recently discovered, is that the migration will pollute the `.gitattributes` of ancestor commits that do not contain the LFS objects.

In other words, if you migrate a repo that has commits A->B->C, and C adds the large files, then commits A & B will gain a `.gitattributes` referring to the large files that do not exist in A & B.

This is because the migration function carries its `.gitattributes` structure backwards as it walks the history, for caching purposes, and does not cross-reference it against the current commit.


That doesn't sound right. There's no way it's adding a file to previous commits, that would change the hash and thereby break a lot of things.


`git lfs migrate` rewrites the commits to convert large files in the repo to/from LFS pointers, so yes, it does change the hashes. That's a well-documented effect.

https://github.com/git-lfs/git-lfs/blob/main/docs/man/git-lf...

Now, granted, usually people run migrate only to convert new local commits, so by nature of the ref include/exclude system it will not touch older commits. But in my case I was converting an entire repo into one using LFS. I hoped it would preserve the commits in a base branch that didn't contain large files, but to my disappointment I got the .gitattributes pollution described above.
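
For concreteness, the difference between the two modes looks roughly like this (the `*.bin` pattern is just an example):

    # default: only rewrites commits on the current branch that no remote has yet
    git lfs migrate import --include="*.bin"

    # whole-repo conversion: rewrites every local ref, which is where the
    # backwards-propagating .gitattributes behavior described above shows up
    git lfs migrate import --everything --include="*.bin"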


From the documentation, like 2 paragraphs in:

> In all modes, by default git lfs migrate operates only on the currently checked-out branch, and only on files (of any size and type) added in commits which do not exist on any remote. Multiple options are available to override these defaults.

Were your remotes not configured correctly?


Let me repeat myself:

> But in my case I was converting an entire repo into one using LFS.

Then check out the section in the manual, "INCLUDE AND EXCLUDE REFERENCES".


OK, and your main complaint was that it added .gitattributes to all previous commits? If someone were to go back and add a .bin file to the earlier commits, you would still want it in LFS, right? I'm not sure what "cross-referencing against the current commit" would mean in that case. I don't see why you would want to use the .gitattributes from a different branch, like main or something. It seems very un-git-like for an operation to reference another branch without being explicitly told to do so.

But anyway, yes, LFS rewrites history if you want to apply it to history. I agree it's sub-par; it's disruptive and risks breaking links to specific git hashes.


The issue is that the migration is unlike starting to use LFS on a repo, because the metadata propagates 'backwards in time' instead of reflecting what is actually in the repo at that commit.


> LFS does break disconnected/offline/sneakernet operations which wasn't mentioned and is not awesome

Yea, I had the same thought. And TBD on large object promisors.

Git annex is somewhat more decentralized as it can track the presence of large files across different remotes. And it can pull large files from filesystem repos such as USB drives. The downside is that it's much more complicated and difficult to use. Some code forges used to support it, but support has since been dropped.
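
A rough sketch of the git-annex flow with a USB-drive remote, for comparison (the remote name and paths are made up):

    git annex init "laptop"
    git annex add big-dataset.iso        # content goes into the annex; a symlink/pointer gets committed
    git commit -m "add dataset"

    git remote add usbdrive /media/usb/repo
    git annex sync usbdrive              # sync branches and location-tracking metadata
    git annex copy big-dataset.iso --to usbdrive
    git annex whereis big-dataset.iso    # lists which remotes actually hold the content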


Git LFS didn't work with SSH; you had to get an SSL cert, which GitHub knew was a barrier for people self-hosting at home. I think GitLab got it patched for SSH finally, though.


letsencrypt launched 3 years before git-lfs


That's already a domain name and a more complicated setup without a public static IP in home environments, and in corporate environments you're now dealing with a whole process, etc., that might be easier to get through by... paying for GitHub LFS.

I think it is a much bigger barrier than SSH, and I have seen it be one on short-timeline projects where it's getting set up for the first time and they just end up paying GitHub's crazy per-GB costs, or building rat's nests of tunnels and VPN configurations for different repos to keep remote access with encryption, with a whole lot more trouble than just an SSH path.


Let's Encrypt was founded in 2012 but became available in the wild in December 2015; git-lfs arrived in mid-2014. So, same era in general.


You're right, I had the wrong date for LFS on GitHub.


They don't sell them. But if the developer/hotelier had a sufficiently large network (think providing service equivalent to the number of rooms across a US state university system's network, i.e. multiple universities), then they might qualify: https://openconnect.netflix.com/en/


There are plenty of hotel groups big enough for that, but their properties are geographically distributed and I can't imagine they'd benefit from running fibre for their own multi-site network. Better to just connect each property to a local ISP like everyone else.

Maybe there are some exceptions. Disney World? MGM Resorts in Las Vegas?


Reuters builds software for a variety of fields and maintains datasets that would be useful in identifying if, say, an email with an invoice purporting to be from a specific company aligns with the invoicing practices of that company.

It would be more accurate to compare that side of Reuters to LexisNexis, Wolters Kluwer, or perhaps Bloomberg.


Apparently Experian does a far better job of giving credit scores to Americans than Thomson Reuters does against criminal immigrants.


I'm curious how well this article resonates with people outside a particular bubble (vs. being puzzling if you are inside a different bubble.)

The statement that Anduril sponsoring a NixOS conference was inherently damaging (as opposed to the reaction causing the damage), the "When did defense work stop being taboo" framing, etc.

I've worked in the US Midwest -> SF Bay -> US West, and defense work never seemed particularly taboo in my circles; it was more that the work was seen as boring and constricting.

Traditionally cautious sectors adopting a particular technology seems like a sign that a technology is viewed as having a particular level of dependability. That's a good thing.


I think the fact that Anduril in particular is involved is relevant, because Palmer Luckey and the whole Thiel company orbit around it are extremely divisive, and there's a military/civil divide along political axes in the US. Here in Europe that's usually not the case, and Helsing being a European company, particularly now with the security situation on the continent, just isn't going to cause much furor.


The irony is that Luckey and Musk, despite their personal issues and divisiveness, are some of the better defense contractors in terms of actually providing good value for dollar and getting things done on time. Compare against, say, Boeing.

I suggest that the Europeans should get over their moral reservations about military industries quickly because the upcoming US administration is not likely to be as helpful as previous ones in the event that Russia decides to test the integrity of NATO.


> I suggest that the Europeans should get over their moral reservations about military industries quickly

Which of us Europeans are you referring to exactly?

Sweden joined NATO and many countries in the bloc have increased spending. In the Netherlands we sent fighter jets to Ukraine to try and help in the war against Russia.

This comment is just downright ignorant and condescending. I guess this is how Trump voters view Europe though?


I agree - and what's funny is that according to this blog it was the US community that rejected US MIC companies, and the EU community didn't reject the EU MIC company.


We're commenting on a long essay about making tech conferences hostile to any kind of defense contractor presence, one that prefaced itself with a "content warning" simply because a handful of defense contractors were mentioned. That kind.

That obviously doesn't represent most Europeans, and of course there are many Americans that hold similar views. But I do also think it's true that Europe still hasn't really "woken up" to the scale of the problem on their hands.

On spending, most nations that don't directly border Russia are only barely meeting the goals they set forth a decade ago and they're doing so at the last possible moment, to say nothing of the complete inadequacy of that goal given the largest war since WWII is now happening at their doorstep.


This is one of those "the internet isn't reality, and it is self selecting" issues.

Depending on the day and topic, a lot of things look all one way based on who's commenting on them.


"When did defense work stop being taboo" etc.

There's a good quote in the Economist story on autonomous drones that's also linked from the front page [1]. The idea that you can ethically shun defense work is itself a privilege and a luxury that many people throughout the world don't enjoy.

“It’s the best feeling to see your drone enter a tiny opening in an enemy trench,” says Denys, an engineer at The Fourth Law, the Ukrainian firm which makes these autonomous drones. “I used to be a pacifist, but Russia’s war has stripped me of that privilege.”

As long as there are countries like Russia, there will have to be a strong defense industry. The leaders of such countries understand nothing but violence, so unfortunately, violence it is.

1: https://news.ycombinator.com/item?id=42352871


deleted


I've never worked in defense. Why do you equate working in those regions with working in defense?


I clearly got it wrong.


[flagged]


Note that a pathological kind of "social justice" that alienates a bunch of people who the ingroup considers irredeemable is simply known as sociopathy.


> Because, for whatever reason I’ve yet to grasp, homelab folks like to implement Tailscale as some sort of “secure virtual network” abstraction layer - think something similar to zScaler ZPA - on top of their local LAN.

This is Tailscale's intended behavior, not a matter of how homelab folks like to implement it: https://github.com/tailscale/tailscale/issues/659#issuecomme...


This is why I (thought I) prefaced my gripe with the context of date and documentation. Looking at modern docs, yeah, it absolutely looks like it's trying to be a freemium alternative to something like zScaler but on top of WireGuard (a virtual secure network), but the OP's article still makes me bristle because it demonstrates a lack of knowledge of the implications of that deployment model.

Case in point is that their grievance is about SMB to their NAS being routed over Tailscale despite being on the same network as the SMB endpoints. Ideally this is something that should’ve come up during the architecture phase of deployment: how should traffic be handled when both machines share the same network? When should Tailscale’s routing table prefer the local adapter over the Tailscale adapter? If Tailscale cannot be configured to advertise a specific link speed that accurately reflects network conditions, how can we apply policies to the endpoints to route traffic correctly?

I admittedly used this article as a personal soapbox to yell at (software) folks to get out of my lane (IT), and that was a fault of mine; I should’ve taken more time to articulate the pitfalls of these sorts of rapid deployments homelabs can facilitate, and share my expertise from my field with others instead of grandstanding. That’s on me.


Maybe I'm not understanding properly, but why can't my device ARP ping and handshake with the subnet router to determine that I'm on the local subnet and to stop routing it through Tailscale?


Tailscale intentionally overrides your device's routing table to force traffic between hosts in the same subnet to go over a Wireguard tunnel instead of bypassing it. They do this because they believe that the presumption that a local subnet is trustworthy is false.
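
A quick way to see this from a Linux client, assuming a default Tailscale setup (the addresses below are placeholders):

    ip route get 192.168.1.20     # shows which interface the kernel will actually use for the NAS's LAN address
    ip route get 100.101.102.103  # the NAS's Tailscale address will route via tailscale0
    tailscale status              # shows whether a peer connection is direct or relayed via DERP

If the first lookup comes back via tailscale0 even though you're physically on the same LAN, that's the override in action.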


It could, but the Tailscale devs don't consider "silently start leaking traffic to anyone on the local subnet" to be a desirable feature.


Could you elaborate on what you find “vile and disgusting“ about that meme?

