SCdF's comments | Hacker News

Another way of phrasing this, though, is that it's in the team's power to determine process (or the lack thereof).

Regardless of success or failure, you can assess to what degree this is true, and to me this is really the only part of "agile" that is worth locking in.


Ironically the only place I encounter this is using google news, where news sites seem to detect you're coming from google news (I don't think these same sites do it when I'm just browsing normally?), and try to upsell you their other stories before you go back to the main page.

After mucking around with various easy-to-use options, my lack of trust[1] pushed me to a more-complicated-but-at-least-under-my-control option: syncthing + restic + an s3-compatible cloud provider.

Basically it works like this:

- I have syncthing moving files between all my devices. The larger the device, the more stuff I move there[2]. My phone only has my keepass file and a few other docs, my gaming PC has that plus all of my photos and music, etc.

- All of this ends up on a raspberry pi with a connected USB hard drive, which has everything on it. Why yes, that is very shoddy and short term! The pi is mirrored on my gaming PC though, which is awake once every day or two, so if it completely breaks I still have everything locally.

- Nightly a restic job runs, which backs up everything on the pi to an s3 compatible cloud[3], and cleans out old snapshots (30 days, 52 weeks, 60 months, then yearly)

- Yearly I test restoring a random backup, both on the pi, and on another device, to make sure there is no required knowledge stuck on there.

This was somewhat of a pain to set up, but since the pi is never off it just ticks along, and I check it periodically to make sure nothing has broken.
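In case it's useful, the nightly job is roughly a cron script like this (the repo URL, paths, and the yearly keep count here are placeholders rather than my exact config):

    #!/usr/bin/env bash
    # nightly restic run on the pi; repo name, paths and keep counts are illustrative
    set -euo pipefail
    export RESTIC_REPOSITORY="s3:s3.wasabisys.com/pi-backups"
    export RESTIC_PASSWORD_FILE="/home/pi/.restic-password"
    restic backup /mnt/usb    # everything syncthing has gathered on the pi
    # enforce the 30 days / 52 weeks / 60 months retention; the yearly count is a guess
    restic forget --prune --keep-daily 30 --keep-weekly 52 --keep-monthly 60 --keep-yearly 10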

[1] there is always weirdness with these tools. They don't sync how you think, or when you actually want to restore it takes forever, or they are stuck in perpetual sync cycles

[2] I sync multiple directories, broadly "very small", "small", "dumping ground", and "media", from smallest to largest.

[3] Currently Wasabi, but it really doesn't matter. Restic encrypts client-side, you just need to trust the provider enough that they don't completely collapse at the same time that you need backups.


I also have a lil script that rolls dice on a restic snapshot, then lists its files and picks a random set to restore to /dev/null.

I still trust restic's checksums to actually check whether a restore is correct, but that way a random part of the storage gets tested every so often, in case some old pack file has been damaged.
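It's roughly this shape (file count and jq filters reconstructed from memory, so treat it as a sketch rather than the exact script):

    #!/usr/bin/env bash
    set -euo pipefail
    # pick a random snapshot, then read a handful of random files out of it and discard the data
    snap=$(restic snapshots --json | jq -r '.[].short_id' | shuf -n 1)
    restic ls "$snap" --json \
      | jq -r 'select(.type == "file") | .path' \
      | shuf -n 20 \
      | while read -r f; do
          restic dump "$snap" "$f" > /dev/null
        done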


That's a really good idea, imma steal it.

We need to talk about The Cone of Backups(tm), which you and I seem to have separately derived!

Props for getting this implemented and seemingly trusted... I wish there was an easier way to handle some of this stuff (eg: tiny secure key material => hot syncthing => "live" git files => warm docs and photos => cold bulk movies, isos, etc)... along with selective "on demand pass through browse/fetch/cache"

They all have different policies, sizes, costs, technical details, and overall SLA/quality tradeoffs.


Does syncthing work yet?

~ 5 years ago, I had a development flow that involved a large source tree (1-10K files, including build output) that was syncthing-ed over a residential network connection to some k8s stuff.

Desyncs/corruptions happened constantly, even though it was a one-way send.

I've never had similar issues with rsync or unison (well, I have in unison, but that's two-way sync, and it always prompted to ask for help by design).

Anyway, my decade-old Synology is dying, so I'm setting up a replacement. For other reasons (mostly a decade of systemd / PulseAudio finding novel ways to ruin my day, and not really understanding how to restore my Synology backups), I've jumped ship over to FreeBSD. I've heard good things about using ZFS to get:

sanoid + syncoid -> zfs send -> zfs recv -> restic

In the absence of ZFS, I'd do:

rsync -> restic

Or:

unison <-> unison -> restic.
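For the ZFS route, the middle two steps are just this (dataset and host names invented; sanoid would schedule/prune the snapshots and syncoid would automate the incremental bookkeeping around the send/recv):

    # hypothetical dataset and host names
    zfs snapshot tank/data@nightly
    zfs send tank/data@nightly | ssh backup-host zfs recv backup/data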

So, similar to what you've landed on, but with one size tier. I have docker containers that the phone talks to for stuff like calendars, and just have the source of the backup flow host my git repos.

One thing to do no matter what:

Write at least 100,000 files to the source, then restore from backup (/ on a Linux VM is great for this). Run rsync in dry-run / checksum mode on the two trees, and confirm the metadata + contents match on both sides. I haven't gotten around to this yet with the flow I just proposed. Almost all consumer backup tools fail this test; comments here suggest Backblaze's consumer offering fails it badly. I'm using B2, but I haven't scrubbed my backup sets in a while. I get the impression it has much higher consistency / durability.
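The comparison itself is just rsync in dry-run mode over the two trees, something like (paths are placeholders):

    # any itemized output means metadata or contents differ between source and restore
    rsync --dry-run --archive --checksum --itemize-changes --delete \
        /original/root/  /restored/root/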


I've personally had no major issues with syncthing, it just works in the background, the largest folder I have synced is ~6TB and 200k files which is mirroring a backup I have on a large external.

One particular issue I've encountered is that syncthing 2.x does not work well for systems without an SSD, because the storage backend switched to sqlite, which doesn't perform as well as leveldb on HDDs; scans of the 6TB folder were taking an excessively long time to complete compared to 1.x using leveldb. I haven't encountered any issues with mixing the use of 1.x and 2.x in my setup. The only other issues I've encountered are usually related to filename incompatibilities between filesystems.


I will say I specifically don't sync git repos (they are just local and pushed to github, which I consider good enough for now), and I am aware that syncthing is one of those tools that does not work well with git.

syncthing is not perfect, and can get into weird states if you add and remove devices from it for example, but for my case it is I think the best option.


Anecdotally, I've been managing a Syncthing network with a file count in the ~200k range, everything synced bidirectionally across a few dozen (Windows) computers, for 9 years now; I've never seen data loss where Syncthing was at fault.

Good to know. I wonder what the difference is. We were doing things like running go build inside the source directory. Maybe it can't handle write races well on Linux/macOS?

This is very silly. It's just a combination of that Derren Brown astrology experiment [1] and madlibs.

[1] https://www.youtube.com/watch?v=haP7Ys9ocTk


If you're interested, Community Fibre is a yes from this website


The US didn't refill its own strategic oil reserve before it attacked and raised its own oil prices, there is no foreseeable exit strategy where Iran doesn't now effectively own and charge usage for the strait, and Russia (and Iran, but I digress) are now more able to sell their oil than before, bolstering their economy and helping them continue to attack Ukraine.


Yeah, I couldn't be bothered getting my accurate chest strap out, but my watch (which is generally very close to the strap) was anywhere from 10-20 off what it was reporting. This is sitting down, 30min after a run.


tbf they have been saying they've started doing this since December, so we're only a few months in. And like most software it's an iceberg: 99% of the work is not observable by users, and in Spotify's case listeners are only one of presumably dozens of different kinds of users. For all we know they are shipping massive improvements to eg billing.


If it indicates, culturally in the current zeitgeist, that an AI wrote it, it becomes a bad structure.


I'd prefer we look to other trust metrics, rather than change for the sake of others' interpretation of who we are.


Other metrics:

Comment 1: 2026-02-18T23:45:12 1771458312 https://news.ycombinator.com/item?id=47067991

Comment 2: 2026-02-18T23:45:32 1771458332 https://news.ycombinator.com/item?id=47067994

20 seconds between comments.

--

or:

Comment 1: 2026-02-18T20:06:49 1771445209 https://news.ycombinator.com/item?id=47065649

Comment 2: 2026-02-18T20:07:12 1771445232 https://news.ycombinator.com/item?id=47065653

23 seconds between comments.

---

It's a bot.


Agreed, and we're gonna see this everywhere that AI can touch. Our filter functions for books, video, music, etc are all now broken. And worst of all, that breaking coincides with an avalanche of slop, making detection even harder.

There is this real disconnect between what the visible level of effort implies you've done, and what you actually have to do.

It's going to be interesting to see how our filters get rewired for this visually-impressive-but-otherwise-slop abundance.


My prediction is that reputation will be increasingly important, certain credentials and institutions will have tremendous value and influence. Normal people will have a hard time breaking out of their community, and success will look like acquiring the right credentials to appear in the trusted places.


That's been the trajectory for at least the last 100 years, an endless procession of certifications. Just like you can no longer get a decent-paying blue collar job without at least an HS diploma or equivalent, the days of working in tech without a university education are drying up and have been doing so for a while now.


The recent past was a nice respite from a strict caste system, but I guess we’re going back.


I think the recent past was a respite in very specific contexts like software maybe. Others, like most blue collar jobs, were always more of an apprentice system. And, still others, like many branches of engineering, largely required degrees.


This isn't new; it's been happening for decades.


Not new. No. But will be more.


Maybe my expensive university degree was worth it after all


I have a sci-fi series I've followed religiously for probably 10 years now. It's called the 'Undying Mercenaries' series. The author is prolific, like he's been putting out a book in this series every 6 months since 2011. I'm sure he has used ghost writers in the past, but the books were always generally a good time.

Last year though I purchased the next book in the series and I am 99% sure it was AI generated. None of the characters behaved consistently, there was a ton of random lewd scenes involving characters from books past. There were paragraphs and paragraphs of purple prose describing the scene but not actually saying anything. It was just so unlike every other book in the series. It was like someone just pasted all the previous books into an LLM and pushed the go button.

I was so shocked and disappointed that I paid good money for some AI slop that I've stopped following the author entirely. It was a real eye opener for me. I used to enjoy just taking a chance on a new book, because the fact that it made it through publishing at least implied some minimum quality standard, but now I'm really picky about what books I pick up because the quality floor is so much lower than in the past.


Yes, I have not bought a few books after reading their free chapters and getting suspicious.

Honestly: there is SO much media, certainly for entertainment. I may just pretend nothing after 2022 exists.


When I do YouTube searches I tend to limit the search to videos prior to 2022 for this reason.


If you've some time to burn, write the author and/or his publisher and let them know that the guy's new ghostwriter sucks shit. If this is very seriously making you consider not picking up the next book in the series, be sure to mention that.

If folks just stop purchasing the new books, they can imagine a reason for the lost sales that's convenient for them, but if folks tell them why they stopped purchasing, there's a lot less room for that kind of nonsense.


That’s a good idea actually. It’s easy to forget this kind of thing is an option.


People will build AI 'quality detectors' to sort and filter the slop. The problem is of course it won't work very well and will drown all the human channels that are trying to curate various genres. I'm not optimistic about things not all turning into a grey sludge of similar mediocre material everywhere.


Is there a way to have a social media platform with hand-written letters, sent with ravens? That's AI proofed... for a while at least!


I worry that the focus on AI proofing will lead to a deanonymization of the internet. If we force every interaction to be associated with a real world id, we can kill a lot of the bots.


> deanonymization of the internet

This is going to happen anyway because 'the powerful' want to track what everyone does and says. AI is going to accelerate this because now they have a much more efficient means to filter and identify the people that are doing and saying things that the powerful don't like. The powerful will also be able to get real world ID credentials for their bots if they wanted or needed them, so this will not stop the problem of bots.


Exactly, and we will have those who will "game" the "detectors" like they already "game" the social media "algorithms" :\


I believe that a history of written work verified by stylometry will be a viable reputation system.

