Personally I keep it syncing off TTRSS for filtering and automatic actioning on certain feed entries, but that ain't everyone's cup of tea. I'd like to think NNW at least covers most people's use cases, whether standalone or relying on another service to aggregate.
I was a NNW user for years and it's why I eventually built my own news reader. NNW had a lot of great features and I wanted to mostly keep them. You might find that NewsBlur takes a similar path but with a different set of opinions.
RSS’ death is real - 15 years ago, almost every news site had an RSS feed, and some had several. Today? An RSS feed is rare.
So if you want to build a news feed from news sites, you have to parse their HTML, and of course every site has its own structure. JS-powered sites are the painful ones.
> 15 years ago, almost every news site had a RSS feed, some had several ones. Today? RSS feed is rare.
It may be a reflection of where you get your news.
New York Times, Washington Post, Wall Street Journal, Radio Free Europe, Mainichi, and lots of other legitimate primary source Big-J journalism news sites have RSS.
Rando McRepost's AI-Generated Rehash Blog? Not so much.
I don't know; I also only use RSS (with the exception of Reddit, I think), so I wouldn't even notice a website that a) provides content I want to be notified about without actively visiting and b) has no feed.
It is somehow less funny today, but in the '90s we would say, "is there something wrong with your hands?"
A truly funny story: I wrote an RSS aggregator, and one day I discovered some feeds had died without me noticing it. I looked at the feed: it was gone. I looked at my aggregator, and the headlines were all there?!
Since I gather a lot of feeds, I couldn't help but notice that a very large number aren't well-formed. For example, in XML attributes the & (in URLs) is supposed to be escaped as &amp;; if you do that, however, many aggregators won't be able to parse it.
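To illustrate the escaping problem from the strict-parser side, here's a minimal sketch using Python's stdlib XML parser (the example XML and variable names are mine, not the commenter's):

```python
import xml.etree.ElementTree as ET

# A bare & inside an attribute value is not well-formed XML,
# so a strict parser rejects the whole document:
bad = '<item><link href="https://example.com/?a=1&b=2"/></item>'
try:
    ET.fromstring(bad)
except ET.ParseError as e:
    print("rejected:", e)

# Properly escaped as &amp; it parses fine, and the attribute
# value decodes back to a plain & when you read it:
good = '<item><link href="https://example.com/?a=1&amp;b=2"/></item>'
root = ET.fromstring(good)
print(root.find("link").get("href"))  # https://example.com/?a=1&b=2
```

Lenient feed parsers often skip this decoding step entirely, which is presumably why correctly-escaped feeds can still trip some aggregators up.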
Every other month I wrote little bits of code to address the most annoying issues.
1) If I can't find a <link> or <guid> etc., I eventually just gather the <a>'s and take the href.
2) If I really can't find a title for the item, I have it fall back on whatever is in the <a>, since I was gathering those anyway.
3) If I can't even find an <item>, I just look for the things that are supposed to go in the <item>.
4) If I can't find a proper timestamp, I'll try to parse one out of the URL.
5) If the URLs are relative paths, complete them.
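The fallback chain above could be sketched roughly like this, using only the stdlib (a toy version under my own names, not the actual aggregator's code):

```python
import re
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkScraper(HTMLParser):
    """Last-resort 'feed' parser: when no <item>/<link>/<guid> is
    found, just collect every <a href> and its anchor text."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []          # (absolute_url, title) pairs
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                # rule 5: complete relative paths against the page URL
                self._href = urljoin(self.base_url, href)
                self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            # rule 2: fall back on the <a> text as the item title
            title = " ".join("".join(self._text).split())
            self.links.append((self._href, title))
            self._href = None

def date_from_url(url):
    """Rule 4: try to recover a timestamp from the URL path."""
    m = re.search(r"/(\d{4})/(\d{1,2})/(\d{1,2})/", url)
    return tuple(map(int, m.groups())) if m else None

scraper = LinkScraper("https://example.com/feed")
scraper.feed('<a href="/2024/05/03/big-story">Big story</a>')
print(scraper.links)
print(date_from_url(scraper.links[0][0]))  # (2024, 5, 3)
```

Point this at a homepage full of headline links and it will cheerfully return plausible-looking "items" with valid dates, which is exactly the failure mode the story is about.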
What was actually going on: the feed was gone, and it redirected to the home page. In an attempt to parse the "xml", my code eventually resorted to gathering the URLs and titles from the <a>'s and built valid timestamps from the URLs.
Mistral used to serve a feed actually up until 6ish months ago I guess? Their admin console used to be built with HTMX too which I found kinda interesting.
Now the news site and admin console are all Next.js, slow, and with no feed.
You jogged my memory and I downloaded it, after a small detour trying to find a version that'll run on my 11YO iMac.
I have a whole collection of feeds already, which I have no knowledge of at all. Many I've never even heard of. Is this a default thing, or was I accidentally bookmarking RSS feeds or something years ago and never knew?
Seriously. I've been updating NewsBlur with all the pet features people have wanted for years, and I'm finding it's even more enjoyable now with all those AI features built in. Daily briefing, ask AI, story clustering: all of these are AI-flavored improvements to RSS, and it's so relaxing to open up my river of news and scroll through all the good stuff without feeling like a gross algorithm is surfacing endless outrage.
I read plenty of X as well as scroll through various social media apps and nothing comes close to how great RSS feels to read.
I (re)discovered RSS a few months ago via NetNewsWire, and it’s so calming and empowering to curate one’s own feed.
Rumors of RSS’ death are greatly exaggerated.