
I’d like to use this space to praise everyone involved in creating and keeping NetNewsWire alive.

I (re)discovered RSS a few months ago via NetNewsWire, and it’s so calming and empowering to curate one’s own feed.

Rumors of RSS’ death are greatly exaggerated.



Definitely my favorite mobile RSS app.

Personally I keep it syncing with TTRSS for filtering and automatic actions on certain feed entries, but that isn't everyone's cup of tea. I'd like to think NNW covers most people's use cases, whether standalone or relying on another service to aggregate.


NetNewsWire is SO good - both the macOS and iPhone apps. Real labor of love. We are very lucky to have it.


I agree. I'm just sad that, since I'll personally never upgrade to Liquid (Gl)ass, they stopped updating NetNewsWire for macOS versions before Tahoe.


I believe they just finished, or are in beta on, a release of the new version for older OSes.


Wow, I've checked and that's true. I'm super happy; I didn't expect that. Thanks!


There is a new version for Sonoma and Sequoia! I am so happy.


I was a NNW user for years and it's why I eventually built my own news reader. NNW had a lot of great features and I wanted to mostly keep them. You might find that NewsBlur takes a similar path but with a different set of opinions.


RSS's death is real: 15 years ago, almost every news site had an RSS feed, and some had several. Today? An RSS feed is rare.

So if you want to build a news feed from news sites, you have to resort to parsing their HTML, and of course every site has its own structure. JS-powered sites are especially painful.


15 years ago, almost every news site had an RSS feed, and some had several. Today? An RSS feed is rare.

It may be a reflection of where you get your news.

New York Times, Washington Post, Wall Street Journal, Radio Free Europe, Mainichi, and lots of other legitimate primary source Big-J journalism news sites have RSS.

Rando McRepost's AI-Generated Rehash Blog? Not so much.


I don't know; I also only use RSS (with the exception of Reddit, I think), so I would not even notice a website that a) provides content I want to be notified about rather than actively visit, and b) has no feed.


Reddit also has RSS feeds; add `.rss` to URLs.
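A quick sketch of that URL pattern — the helper name here is made up, but the trick is just appending `.rss` to the page's path:

```python
from urllib.parse import urlsplit, urlunsplit

def reddit_rss_url(url: str) -> str:
    """Turn a Reddit page URL into its RSS feed URL by appending
    `.rss` to the path (before any query string)."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") + ".rss"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

print(reddit_rss_url("https://www.reddit.com/r/rss/"))
# https://www.reddit.com/r/rss.rss
```

The same suffix works on subreddits, user pages, and comment threads.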


There are feeds of everything. You just have to look harder.

edit: provide an example please



Uh, they lie about everything?

https://www.abc.net.au/news/feed/51120/rss.xml

I haven't fully examined it, but looking at the XML I see it was last built in 2026, and there's a headline about the Women's Asian Cup 2026.

abc.net.au/news/2026-03-05/matildas-iran-asian-cup-quick-hits-hayley-raso-mary-fowler/106413886


Oh that's wild. I guess the system is just on autopilot and nobody knew how to actually act on their policy change.


It's all about licensing sadly...


It is somehow less funny today but in the 90's we would say "is there something wrong with your hands?"

A truly funny story: I wrote an RSS aggregator, and one day I discovered some feeds had died without me noticing. I looked at a feed: it was gone. I looked at my aggregator, and the headlines were all there?!?!

Since I gather a lot of feeds, I couldn't help but notice that a very large number aren't well-formed. For example, in XML attributes the & (in URLs) is supposed to be escaped as &amp;amp;, but if you do that, many aggregators won't be able to parse it.
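A minimal demonstration of that well-formedness rule with Python's stdlib parser (the sample URLs are invented): a bare `&` in an attribute makes the document invalid XML, while the escaped form parses fine.

```python
import xml.etree.ElementTree as ET

# Bare & in an attribute value: not well-formed XML.
bad = '<link href="https://example.com/?a=1&b=2"/>'
# Escaped as &amp;: well-formed, per the XML spec.
good = '<link href="https://example.com/?a=1&amp;b=2"/>'

def parses(s: str) -> bool:
    """Return True if s is well-formed XML."""
    try:
        ET.fromstring(s)
        return True
    except ET.ParseError:
        return False

print(parses(bad))   # False
print(parses(good))  # True
```

So a strict feed consumer rejects the unescaped version, while (as the comment notes) some lenient aggregators choke on the correctly escaped one instead.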

Every other month I wrote little bits of code to address the most annoying issues:

1) If I can't find a <link> or <guid> etc., I eventually just gather the <a>'s and take the href.

2) If I really can't find a title for the item, I fall back on whatever is in the <a>, since I was gathering those anyway.

3) If I can't even find an <item>, I just look for the things that are supposed to go in the <item>.

4) If I can't find a proper timestamp, I'll try to parse one out of the URL.

5) If the URLs are relative paths, I complete them.
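Fallbacks 1, 2, and 4 above can be sketched roughly like this (hypothetical helpers, not the commenter's actual code), using only the stdlib:

```python
import re
from datetime import datetime
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Fallbacks 1+2: when a feed has no usable <item>/<link>/<guid>,
    harvest every <a href> and use the anchor text as the title."""
    def __init__(self):
        super().__init__()
        self.links = []      # list of (href, title) pairs
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def date_from_url(url):
    """Fallback 4: try to recover a timestamp from the URL path,
    e.g. /2026/03/05/story or /2026-03-05/story."""
    m = re.search(r"(\d{4})[-/](\d{2})[-/](\d{2})", url)
    return datetime(int(m.group(1)), int(m.group(2)), int(m.group(3))) if m else None

scraper = LinkScraper()
scraper.feed('<html><body><a href="/2026/03/05/story">A headline</a></body></html>')
print(scraper.links)                        # [('/2026/03/05/story', 'A headline')]
print(date_from_url("/2026/03/05/story"))   # 2026-03-05 00:00:00
```

Note how aggressive these heuristics are: they happily extract "entries" from any HTML page, which is exactly what sets up the punchline below.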

What was actually going on: the feed was gone and redirected to the home page. In an attempt to parse the "XML", my aggregator eventually resorted to gathering the URL and title from the <a>'s and building valid timestamps from the URLs.


Not exactly a "news" site, but this is still an example site that you'd expect would have a feed:

https://mistral.ai/news/


Mistral used to serve a feed actually up until 6ish months ago I guess? Their admin console used to be built with HTMX too which I found kinda interesting.

Now the news site and admin console are all in Next.js, slow, and without a feed.



You jogged my memory and I downloaded it, after a small detour trying to find a version that'll run on my 11YO iMac.

I have a whole collection of feeds already, which I have no knowledge of at all. Many I've never even heard of. Is this a default thing, or was I accidentally bookmarking RSS feeds or something years ago and never knew?


Love it, also shoutout to NewsFire from the days of yore.

https://newsfirex.com

Just look at it, NNW is still using the same great design.


Seriously. I've been updating NewsBlur with all the pet features people have wanted for years, and I'm finding that it's even more enjoyable now with all those AI features built in. Daily briefing, ask AI, story clustering: all of these are AI-flavored improvements to RSS, and it's so relaxing to open up my river of news and scroll through all the good stuff without a gross algorithm surfacing endless outrage.

I read plenty of X as well as scroll through various social media apps and nothing comes close to how great RSS feels to read.



