Largely agreed, with one exception. If you're ever in Boston/Cambridge MA, check out the MIT museum. I've always told people that it's a science museum, but for adults. The Harvard museums are worth visiting as well, but the MIT museum really impressed me with its content.
The MIT museum isn't very good. It is a science museum for adults, but it is too passive an experience for the patron. I recommend the Exploratorium in San Francisco instead as the science museum for adults.
I've only been to a play (staged reading) at the new one, but in general I'm not sure how interested most adults are in interactivity. I've been to the Exploratorium for an event and it was fun. (Having those sorts of distractions is nice when you're tired of feeling like you need to speak to people at an event.) But I'm not sure I'd have made a trip there otherwise.
Similar: my major in university was computer engineering (as opposed to pure CS or EE) because I wasn't sure whether I wanted to go into a hardware or software profession. I ended up going with software, since all the interesting opportunities were there, whereas most jobs in hardware seemed to be working for stodgy old companies, barely making six figures if I was lucky.
It's made me one of the only leaders in my software org who actually knows what happens below the level of the instruction set. I can talk about the power and heat implications of algorithmic decisions. But mostly nobody cares; there's always enough money to buy more servers.
I feel like I see this attitude a lot amongst devs: "If everyone just built it correctly, we wouldn't need these bandaids"
To me, it feels similar to "If everyone just cooperated perfectly and helped each other out, we wouldn't need laws/money/government/religion/etc."
Yes, you're probably right, but no, that won't happen the way you want it to, because we are part of a complex system and everyone has their own very different incentives.
Semantic web was a standard suggested by Google, but unless every browser got on board to break web pages that didn't conform to that standard, then people aren't going to fully follow it. Instead, browsers (correctly in my view) decided to be as flexible as possible to render pages in a best-effort way, because everyone had a slightly different way to build web pages.
I feel like people get too stuck on the "correct" way to do things, but the reality of computers, as is the reality of everything, is that there are lots of different ways to do things, and we need to have systems that are comfortable with handling that.
Was this written by AI? I find it hard to believe anyone who was interested in the Semantic Web would not have known its origin (or at least known that its origin was not Google).
The concept of a Semantic web was proposed by Tim Berners-Lee (who hopefully everyone recognizes as the father of HTTP, WWW, HTML) in 1999 [0]. Google, to my knowledge, had no direct development or even involvement in the early Semweb standards such as RDF [1] and OWL [2]. I worked with some of the people involved in the latter (not closely though), and at the time Google was still quite small.
That was a human-generated hallucination, my apologies. I always associated the semantic web with something Google was pushing to assist with web crawling; my first exposure to it was during the Web 2.0 era (early 2010s), as HTML5 was seeing adoption, and I associated it with Google trying to enhance the web as the application platform of the future.
W3C of course deserves credit for their hard work on this standard.
My main point was that regardless of the semantic "standard", nothing prevented us from putting everything in a generic div, so complaining that everyone's just "not on board" isn't a useful lament.
Google acquired Metaweb Technologies in 2010, and Freebase with it. Freebase was a semantic web knowledge base, and it became deeply integrated into Google's search technology. They did, in fact, want to push semantic web attributes to make the web more indexable, even though they originated neither the bigger idea nor the original implementation.
(one of my classmates ended up as an engineer at Metaweb, then Google)
"I always associated semantic web with something Google was pushing to assist with web crawling, and my first exposure to it was during the Web 2.0 era (early 2010s) as HTML5 was seeing adoption, and I always associated it with Google trying to enhance the web as the application platform of the future."
This sounds more like "indexing" than "crawling"
The "Sitemaps 0.84" protocol, e.g., sitemap.xml, was another standard that _was_ introduced by Google
Helpful for crawling and other things
(I convert sitemap.xml to rss; I also use it to download multiple pages in a single TCP connection)
Not every site includes a sitemap, some do not even publish a robots.txt
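The sitemap-to-RSS conversion mentioned above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual tool: it assumes only the Python standard library and the current sitemaps.org 0.9 namespace (the original 0.84 protocol used a google.com namespace); the function name and default feed title are made up.

```python
import xml.etree.ElementTree as ET

# Namespace for the sitemaps.org 0.9 schema; every element in a
# modern sitemap.xml lives under this namespace.
SM_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_to_rss(sitemap_xml: str, title: str = "Site feed") -> str:
    """Convert a sitemap.xml document into a minimal RSS 2.0 feed."""
    root = ET.fromstring(sitemap_xml)
    items = []
    for url in root.iter(SM_NS + "url"):
        loc = url.findtext(SM_NS + "loc", default="").strip()
        lastmod = url.findtext(SM_NS + "lastmod", default="").strip()
        if loc:
            items.append(
                f"<item><link>{loc}</link><pubDate>{lastmod}</pubDate></item>"
            )
    return (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        f"<title>{title}</title>{''.join(items)}</channel></rss>"
    )
```

In practice you would fetch the sitemap over HTTP first and feed the response body to this function; real tools would also map `lastmod` into RFC 822 date format, which RSS readers expect.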
The phrase “if everyone just” is an automatic trigger for me. Everyone is never going to just. A different solution to whatever the problem is will be necessary.
I can't find a copy of the old "reasons your solution to email spam won't work" response checklist, but one of the line items was "fails to account for human nature".
eh, I feel this, but it's a lot simpler than that. Not "if everyone built everything correctly" but "if everyone's work was even slightly better than complete garbage". I do not see many examples of companies building things that are not complete embarrassing shit. I worked at some companies and the things we built were complete embarrassing shit. The reasons are obvious: nobody internally cares enough to do it, nobody externally has any standards, and the money still flows if you do a bad job, so why do better?
What happens in practice is that the culture exterminates the drive for improvement: not only are things bad, but people look at you like you're crazy if you think things should be better. Like, in 2025, people defend C, people defend Javascript, people write software without types, people write scripts in shell languages; debugging sometimes involves looking at actual bytes with your eyes; UIs are written in non-cross-platform ways; the same stupid software gets written over and over at a million companies; sending a large file to another person is still actually pretty hard, and leaving comments on it is functionally impossible... These are software problems. Everything is shit, everything can be improved on, nothing should be hard anymore, but everything still is; we are still missing a million abstractions that are necessary to make the world simple. Good lord, yesterday I spent two hours trying to resize a PDF. We truly live in the stone age; the only progress we've made is that there are now ads on every rock.
I really wish it was a much more ruthlessly competitive landscape. One in which, if your software is bad, slow, hard to debug, hard to extend, not open source, not modernized, not built on the right abstractions, hard to migrate on or off of, not receptive to feedback, covered in ads... you'd be brutally murdered by the competition in short order. Not like today, where you can just lie on your marketing materials and nobody can really do anything, because the competition is just as weak. People would do a much better job if they had to in order to survive.
> the money still flows if you do a bad job so why do better?
I'll raise. The money flows because you do a bad job. Doing a good job is costly and takes time. The money cannot invest that much time and resources. Investing time and resources builds an ordinary business. The money is in for the casino effect, for the bangs. Pull the arm and see if it sticks. If yes, good. Keep pulling the arm. If not, continue with another slot machine.
I would argue that the money's problem is short-termism, though. It just assumes short-term returns are the correct choice because it lacks the technical understanding of the long-term benefits of a good job.
In my experience many acquisitions mark the peak of a given software product. The money then makes the argument that it's "good enough" and flogs the horse until it's dead, and a younger, more agile team of developers eventually builds a new product that makes it irrelevant. The only explanation I have for why so many applications fail to adapt is a cultural conflict between the software and the money, which always gets resolved by the money winning.
For example, I would suggest that the vast majority of desktop apps, especially those made by SMEs, originally written in MFC or something, fail to make the transition to the online services they need today because of this conflict. The company ends up dying, and the money never works out what it did wrong, because it's much harder to appreciate those long-term effects than the short-term ones that gave them more money at the start.
Today it's easy (finally we have Rust, Zig, Odin, Swift, Go, etc., which show marked improvements), but the OP is correct: a lot of progress was stalled because people defended suboptimal tools.
Every time somebody suggested "but C is wrong," the answer was "You should just not do the wrong thing; be careful every time and things will be fine!"
P.S.: Yesterday we had Pascal/Ada/Eiffel/OCaml and others, but the major issue is that C/JS should have been improved to remove the bad things and add the good things. It's not as if what to do, or why, was a mystery; it was just fear of and arrogance against change.
And this caused too much inertia against "reinventing the wheel" and too much hate toward anybody who even tries.
He seems to have mistaken his personal opinions on which languages and language features are good for some objective truth.
Ironically, that’s part of why we can’t have nice things. People who aren’t open to other viewpoints and refuse to compromise when possible impede progress.
Well, sure, age is part of it. I would hope languages coming out 40-50 years after their predecessors (in the case of Rust following C/C++) would have improved upon those predecessors and learned from the ideas of computer science in the intermediate years.
(Coincidentally this is one of my chief complaints about Go: despite being about the same age as Rust, it took the avenue of discarding quite a lot of advancements in programming language theory and ergonomics since C)
Go has a much different set of design goals than Zig, Nim, or especially Rust. Go is really for people who want a few major improvements on C like easier builds, faster builds, higher-level standard string handling, direct support for some form of concurrency, an optional GC which defaults to on, and a few syntax tweaks - things that a modern C might have that were not going to make it into a standards revision. Rust, to support ownership and the borrow checker at compile time, had to build a useful language around that hugely helpful but quite restrictive requirement. They subsequently went different directions than the Go team on a lot of the other language features. Zig, Nim, and D are something in between those extremes in their own ways.
As someone with a background of a lot of time with Perl and the Pascal/Ada family who was rooting for Julia, Go strikes a good balance for me where I would have used Perl, Python, Julia, Ruby, or shell for system tasks. I haven’t done a lot of work in Rust yet, because in the infrastructure space speed to implement is usually more important than performance and Go is a lot faster already than Python or Ruby and because my team as a whole has much more experience with it. I certainly appreciate why the folks at work who do the performance-critical network and video software use mostly Rust to do what they used to do in C, Java, or C++.
We have to accept that the vast majority of people don't think like us. They don't think it's complete garbage because they can't look hard enough to appreciate that fidelity.
While it might be better if everyone thought like us and wanted things to be _fundamentally_ good, most people don't, and most-people money >> fewer-people money, and the difference in scale is vast.
We could try to build a little fief where we get to set the rules but network effects are against us. If anything our best shot is to try to reverse the centralisation of the internet because that's a big cause of enshittification.
The semantic web came out of work on Prolog and formal systems for AI, which just didn't work well... LLMs and vector databases give us new tools that are pretty usable.
Neither broke web pages, honestly. XHTML requires a DTD named at the top of the document, and browsers will happily fall back to HTML 3, 4, or 5 as they can if there’s no DTD specified.
My interpretation of "break web pages" was serving XHTML with MIME type application/xhtml+xml, in which case browsers don't render anything when the XHTML isn't well-formed, which is really just a strict / validate-my-syntax mode you can opt into.
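To make that strict-vs-lenient contrast concrete, here is a small sketch using standard-library Python to stand in for the two parser behaviors (the markup string is invented): a tag-soup HTML parser happily consumes markup that an XML parser, which is essentially what browsers apply to application/xhtml+xml, rejects outright.

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# An unclosed <br> is normal in tag-soup HTML but not well-formed XML.
markup = "<html><body><p>hello<br></p></body></html>"

# Lenient path: the HTML parser shrugs and keeps going.
tags = []
class TagCollector(HTMLParser):
    def handle_starttag(self, tag, attrs):
        tags.append(tag)
TagCollector().feed(markup)
print(tags)  # ['html', 'body', 'p', 'br']

# Strict path: the XML parser refuses the whole document, the way a
# browser refuses malformed application/xhtml+xml.
try:
    ET.fromstring(markup)
    well_formed = True
except ET.ParseError:
    well_formed = False
print("well-formed XML?", well_formed)  # well-formed XML? False
```

One mismatched tag is enough to blank the entire page in the strict mode, which is exactly why authors mostly stayed on the forgiving text/html path.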
I think they have value as a discovery method and aggregator. If you know the hotel you want, yes, you're better off going direct, but if you want to browse, it's nice to have.
First-party websites to book directly + better aggregators/search like ChatGPT are eroding this value pretty rapidly though. If they leaned into comprehensive trip planning they might have a shot at staying relevant.
>If you know the hotel you want, yes you're better off going direct, but if you want to browse it's nice to have.
I see this in a bunch of the responses, that these sites are great for "discovery". I don't have a preference one way or another, but I'm wondering... why not, say, Google Maps? Go to the locale you want, search for hotels, voila. There's your discovery.
Booking is the only site I've ever found that allows you to search by "has bathtub". It's not always correct - they might have some rooms with tubs, but not all - but it's a damned sight better than random chance or visiting every single hotel website.
I honestly just use every tool I can when trip planning: Google, Booking.com, airbnb when applicable, Expedia, Delta trips, even my credit card company has a link.
Similar to the other responder, I think Booking.com had the best dataset for some random features like a hot tub (specifically a big hot tub, not a bathtub). The problem is that it only searches for hotels with the big hot tub; if you want that actual room, you usually need to book direct.
It also yielded some good results for Japanese ryokan (traditional spa hotel), more so than the other search engines. Google is fine as well but tends to lean more towards big hotel chains IME.
Not saying it's perfect (nobody does ryokan well at all), and the more familiar I get, the more I'll tend to book direct; it's just one search tool out of many. If it went away tomorrow I wouldn't miss it terribly.
Registering an MCP server and calling an MCP tool upon transcript completion (and/or summary completion) would help (check out actionsperminute.io for the vision there).
Calendar integration would be nice to link transcripts to discrete meetings.
Interesting that this is being brought up; I used to use this back in 2013 when developing Rails apps. I've been thinking about what a good "standard" spec language would look like in a post-LLM world. I wonder if the Cucumber syntax could help here?
Seems to me most of the appeal of LLMs is that the standard spec is just language. The next testing product is: take this Jira ticket and make sure the requirements are met. Everything between code and plain-language descriptions of what was built is additional complexity no one wants to deal with.
Glad to see my experience is reflected elsewhere. I've found Gemini 2.5 Pro to be the best bang-for-buck model: good reasoning and really cheap to run (counts as 1 request in Cursor, where Opus can blow my quotas out of the water). Code style works well for me too; it's "basic" but that's what I want. If I had only one model to take to my deserted island, this is the one I'd use right now.
For the heady stuff, I usually use o3 (but only to debug, its coding style is a bit weird for me), saving Opus 4 for when I need the "big guns".
I don't have Claude Code (cursor user for now), but if I did I'd probably use Opus more.
I think B2(small)B SaaS is also in trouble, pretty much now. Enterprise is a different thing, though; the barriers to entry there are not just building and maintaining the software.
I'm in a small non-tech business and already have 7 "vibe coded" apps used daily. Two of them are replacements for what would have been paid for software.
They wouldn't create a Xero replacement (although ironically I did vibecode a business order/finance tracking app for my own side hustle, fuck paying $30/mo when I just need the basics), they would vibecode any of the litany of small industry specific software packages they use.
Small businesses also use a spreadsheet e-mailed from person to person. A vibe-coded x_table- or x_base-powered app would get them 80% there with minimal cost. I'd put it in an "it depends" category of things that might or might not happen in the near future.