
It’s a weak take and here’s why. Huge undertakings like going to the moon are made up of many different individuals with different goals. Some are rocket scientists who want to innovate on the science of rocketry. Others are government admins with political goals.

So calling the entire thing “political” ignores the purposes of the people involved, people who were critical to the outcome, just for the sake of labeling it all “political”.


What’s a better method for determining how to utilize and distribute resources? To determine where energy should be used and where it should be moved from?

Some things are just enjoyable. I get no real utility from photography - it’s not my career, it’s not a side gig, and I’m not giving prints out as gifts. Most of the shots never get printed at all. I do it because I enjoy the act itself, of knowing how to make an image frozen in time look a particular way by tweaking parameters on the camera, and then seeing the result. I furthermore enjoy the fact that I could achieve the same result on a dumb film camera, because I spent time learning the fundamentals.

Don’t apologize to these types of people. It will only make your problem worse as now you’re an admitted offender. Ignore them or better yet laugh at them to put their insane ideas back on the margins where they belong.

AI is unbelievably useful and will continue to make an impact, but a few things:

- The 80/20 rule still applies. We’ve optimized the 20%-of-the-time part (a lot!), but all the hype only counts the 80%-of-the-work part. It looks amazing, and it is, but you can’t escape the reality that ~80% of the time is still needed on non-trivial projects.

- Breathless AI CEO hype exists because they need money. This stuff costs a lot. The hype has passed on to run-of-the-mill CEOs who want to feel ahead of things and smart.

- You should be shipping faster in many cases. Lots of hype, but there is real value, especially in automating lots of communication and organization tasks.


Essentially what happened after the .com bust. For years CS departments had to sell themselves and convince people there was a future in computers.

Not that AI is the same as websites all going broke. But no one can see the future, and it’s unlikely that deep technical knowledge will become obsolete.


Why shouldn’t it be? You can gather and form your thoughts, create a draft, and then have an LLM rewrite it for you. You can write in the style you prefer, so you can focus on the thoughts, and then have the LLM rewrite it in the appropriate style for the audience.
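
Concretely, the workflow can be as simple as this (a minimal sketch using the OpenAI Python client; the model name, file name, and prompts are illustrative, not a recommendation):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    draft = open("draft.txt").read()  # your own words, your own thinking

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Rewrite the user's draft for a general audience. "
                        "Preserve every claim and the original structure; "
                        "change only tone and phrasing."},
            {"role": "user", "content": draft},
        ],
    )
    print(resp.choices[0].message.content)

The point being: the thinking stays yours; only the final pass over the wording is delegated.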


One might worry that it would increase authors’ confidence even when their LLM rewrite introduces errors, reducing accuracy overall regardless of moderators.


You still need to review and edit.


I was being sarcastic, because that point obviously applies to the subjects in the experiment as well.


I think that’s possible too, but the trouble is training them. LLMs are built on decades of human input. A new framework, programming language, database, etc. doesn’t have that.

We are in the low hanging fruit phase right now.


If it knows the language already, a new framework is a piece of cake. A few MD files explaining it are enough for the pattern recognition to kick in. I’ve had one LLM create a novel framework and pass it to another, and it’s trivial for a fresh instance to pick it up.
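
“A few MD files” can literally just be stuffed into the context. A sketch of that, again using the OpenAI client as a stand-in for whatever LLM you use (the docs/ directory and the prompt are hypothetical):

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    # Concatenate the framework's markdown docs into one context blob.
    docs = "\n\n".join(p.read_text() for p in Path("docs").glob("*.md"))

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are working with an in-house framework. "
                        "Its documentation follows:\n\n" + docs},
            {"role": "user",
             "content": "Write a hello-world handler using this framework."},
        ],
    )
    print(resp.choices[0].message.content)

No fine-tuning involved; the model generalizes from the language it already knows plus whatever documentation fits in context.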


It’s in the best interest of companies to list publicly, though. We want as many people in the country as possible invested in as many good companies. Equity in the country is mutual self-interest. Similar to why we want a nation of homeowners, not renters.


Maybe he was there because he wanted to make a better life for himself and his family. Why is learning to do something because it pays well a bad thing? It’s admirable that someone would do that.


> It’s admirable that someone would do that

I guess it could be that. It sounds like you’re hinting at it being almost like a sacrifice: they’d rather be doing something else, but they forced themselves into it to make a better life for their family. It’s like how being a doctor in the US used to be (or still is): someone would rather not deal with blood and guts, but it’s something they’ll force themselves into for a better life.

I suppose one difference here might be whether it’s their family pushing this choice or they do it intrinsically. Will they be disappointed in themselves in the end, or in the person who pushed them onto that path, if it doesn’t work out?


The 80/20 rule doesn’t go away. I am an AI true believer and I appreciate how fast we can get from nothing to 80%, but the last “20%” still takes 80%+ of the time.

The old rules still mostly apply.


Yes, so 80% of 100 hours is considerably less than 80% of 600 hours.


You get 80% done in 20% of the time. The LLM shrinks that 20%. So the first chunk of a 100-hour task maybe takes 5 hours instead of 20, which is great. But the remaining 80 hours are not improved as much. So a 100-hour job takes ~85 hours, which is very good.
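
Spelled out (these are the illustrative numbers above, not measurements):

    # 80/20 split of a 100-hour job, per the comment above.
    hours_first_80_pct = 20   # the "easy" 80% of the work
    hours_last_20_pct  = 80   # the hard remainder
    llm_speedup        = 4    # assumed: LLM makes the first chunk 4x faster

    total = hours_first_80_pct / llm_speedup + hours_last_20_pct
    print(total)  # 85.0 hours, down from 100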

This is in line with Google’s study showing about a 10% productivity increase, and with other research I’ve read. I suspect this will increase with more integrations and workflow adaptations.

But even after power tools changed how quickly carpenters can frame and rough-in a house, the finishing work (which uses power tools too) still takes the majority of the time.


In my experience, the last 20% tends to be the stuff that’s less obvious, too, by its very nature.

The details and pitfalls that are unique to your specific scenario, that you only discover by running into them.

And yet this less obvious, more uncommon stuff is also what AI will be weakest at.

