Hacker News | jordwest's comments

Same, I think there's an idealistic belief among the people who write those tickets that something can be perfectly specified upfront.

Maybe for the most mundane, repetitive tasks that's true.

But I'd argue that the code is the full specification, so if you're going to fully specify it you might as well just write the code; then you'll actually be confronted with your mistaken assumptions.


> I suspect the wealthy think they can shield themselves by exerting control over

Agreed, and I think this is the result of a naive belief we humans tend to have: that controlling thoughts can control reality. Politicians still live by this belief, but eventually reality and lived experience do catch up. By that time all trust is long gone.


It would be kinda funny, if it weren't so tragic, how economists will argue both "[productivity improvements] will make things cheaper" and then in the next breath "deflation is bad and must be avoided at all costs"

But is it really, though? Dollars aren't meant to be held.

I think the idea of dollars as purely a trading medium where absolute prices don't matter wouldn't be such an issue if wages weren't always the last thing to rise with inflation.

As it is now, anyone with assets is barely affected by inflation, while those who earn a living from wages have their livelihoods covertly eroded over time.


Exactly as the current owners… ahem, leaders of this country want it.

Barely affected? They benefit massively from it. That is why the rich get richer.

True, in terms of share of the pie for sure

Yeah, from the perspective of the ultra-wealthy, we humans are already pretty worthless and they'll be glad to get rid of us.

But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.

I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.

Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.


Charlie Chaplin's speech is more relevant now than ever before:

https://www.youtube.com/watch?v=J7GY1Xg6X20


I first saw this about 15 years ago and it had a profound impact on me. It's stuck with me ever since

"Don't give yourselves to these unnatural men, machine men, with machine minds and machine hearts. You are not machines, you are not cattle, you are men. You have the love of humanity in your hearts."

Spoken 85 years ago and even more relevant today


The thing that the ultra-wealthy desire above all else is power and privilege, and they won't be getting either of those in those bunkers.

They sure as shit won't be content to leave the rest of us alone.


Yeah I know it's an unrealistic ideal but it's fun to think about.

That said, my theory about power and privilege is that it's actually just a symptom of a deep fear of death. The reason gaining more money/power/status never lets up is that there's no amount of money/power/status that can satiate that fear, yet somehow, naively, there's a belief that it can. I wouldn't be surprised if most people who have any amount of wealth have a terrible fear of losing it all, and to somebody whose identity is tied to that wealth, that's as good as death.


Going off your earlier comment, what if instead of a revolution, the oligarchs just get hooked up to a simulation where they can pretend to rule over the rest of humanity forever? Or what if this already happened and we're just the peasants in the simulation?

This would make a good Black Mirror episode. The character lives in a totally dystopian world making f'd up moral choices. Their choices make the world worse. It seems nightmarish to us, the viewers. Then towards the end they pull back: they unplug and are living in a utopia. They grab a snack, are greeted by people who love and care about them, then they plug back in and go back to being their dystopian tech bro ideal self in their dream/ideal world.

I like this future, the Meta-verse has found its target market

> Perhaps there is some sort of failure of SWE's to understand that businesses don't care

I think it's an engineer's nature to want to improve things and make them better, but then we naively assume that everybody else also wants to improve things.

I know I personally went through a pretty rough disillusionment phase where I realised most of the work I was asked to do wasn't actually to make anything better, but rather to achieve some very specific metrics that actually made everything but that metric worse.

Thanks to the human tendency to fixate on narratives, we can (for a while) trick ourselves into believing a nice story about what we're doing even if it's complete bunk. I think that false narrative is at the core of mission statements and why they intuitively feel fake (a mission statement is often more gaslighting than guideline - it's the identity a company wants to present, not the reality it does present).

AI is eager to please and doesn't have to deal with that cognitive dissonance, so it's a metric chaser's dream.


The tendency to push and pull on sensations has significantly fallen away for me, although I wouldn't say it's "gone", so I might be able to add something here.

I would agree with Shinzen Young that almost all suffering is caused by this tendency. Not pain or emotional states, but certainly the suffering associated with them.

> I don’t think tanha has 1000 contributing factors; I think it has one crisp, isolatable factor.

I think this is a good intuition. I would suggest if you want to get to the root of it you might like to explore it experientially first. Everyone reading this has direct access to this mechanism in subjective experience, without the need for any neuroimaging equipment or meditating for lifetimes in a cave. Additionally, when you see it for what it is, it begins to unwind.

If I were to put it down to one thing, it would be the tendency to reflect on reality and then to believe that that reflection is actually reality. It sounds simple, and it is, but the tendency runs so deep and there are a multitude of beliefs that get in the way of actually seeing things clearly. If you're willing to question every single one of those beliefs right down to the places you never thought to question or that seem scary to question, then it's available to you.

Edit: A great resource for anyone interested in exploring these beliefs is Kevin Schanilec's interpretation of the 10 fetters at simplytheseen.com. Just a word of warning though, explore this only if you're willing to have your perception of reality turned upside down (and not just intellectually).


Thank you for saying this. This drove me up the wall everywhere I worked that had those arbitrary ~300 line rules, because all that happened was that code review devolved into "I don't like this code pattern, use another code pattern", while the larger architectural ideas that actually became problematic were rarely reviewed, since they weren't obvious when split across 5 PRs, often with different reviewers.

Honestly I think it would be far more effective to just review a paragraph and maybe a diagram that explains "here's how I think I'm going to tackle this problem" and forget about line-by-line code review entirely. Other than for training juniors, I don't think there's much long term value in "I think you should use an anonymous function here instead of a named function".

The kinds of things that are usually brought up in code review are not what contributes to real long term technical debt, because a function name or code formatting can be changed in an instant, while larger architectural decisions cannot.

The other thing I noticed is that even when an architectural issue is obvious, there's a tendency to not want to change it because so much of the work has already been done in preparing the PR for review. If you point out a flaw in an architectural decision, it's not unusual for the person to reasonably say "I've just put together a chain of 5 PRs and now you're asking me to rewrite everything?"


Daniel Mackler has a whole YouTube channel about this

https://youtu.be/eBRNIvK3HqA

My experience matches very much with this thread. After years of therapy I hit a limit to what conventional psychology could explain or understand or “treat”, and the only thing that worked after that was going deeper into my own psyche with meditation.

The whole psyche is available for exploration when you stop believing that you are made of thoughts. It becomes extremely clear where all the anxiety and depression and addiction comes from, and that almost all conventional approaches merely treat the symptoms.

I also took some intro psych at university and remember that, in general, Freud's view was accepted by mainstream psych as the de facto most "correct" and logical view of psychotherapy, while Jung was considered a bit of a weirdo, and I accepted this at the time. However, through my own experiences, I now think Jung was much closer to the truth, particularly around what he calls the "shadow".


Thanks for the link :)


I guess there is a convergence of ideas going on here, because I've actually been tinkering on something like this: a Notion-database-like tool that just uses flat CSV files (internally reading them into an in-memory SQLite database for filtering, grouping and displaying), plus schema files for interpreting the data and displaying it nicely.
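
To illustrate the general idea (just a rough sketch of my own in Python, not the actual project code; "tasks.csv" and its "status" column are made-up examples): read the CSV into an in-memory SQLite table, then let SQL handle the filtering and grouping, while the data itself stays in plain text files any tool can read.

    import csv
    import sqlite3

    # Load a flat CSV file into an in-memory SQLite table so it can be
    # filtered and grouped with plain SQL. The file and column names
    # here are hypothetical.
    def load_csv_into_sqlite(path, table):
        conn = sqlite3.connect(":memory:")
        with open(path, newline="") as f:
            reader = csv.reader(f)
            header = next(reader)  # first row holds the column names
            cols = ", ".join(f'"{c}"' for c in header)
            marks = ", ".join("?" for _ in header)
            conn.execute(f'CREATE TABLE "{table}" ({cols})')
            conn.executemany(f'INSERT INTO "{table}" VALUES ({marks})', reader)
        return conn

    conn = load_csv_into_sqlite("tasks.csv", "tasks")
    for status, count in conn.execute("SELECT status, COUNT(*) FROM tasks GROUP BY status"):
        print(status, count)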

Here's a little demo of what I've got working: https://youtu.be/LCR9pAc_xn0.

It's currently still very rough and I'm just using it myself but hoping to open source it at some point.


Great article, I also have no doubt we're in for some serious changes in society. But I disagree with the conclusion. I think the ideal place to be is neither optimistic nor pessimistic, but neutral. A conviction that the future is bright isn't enough to actually make the future bright.

I was optimistic when Facebook first came out, and it turned into a dystopian nightmare. I was optimistic when self checkouts appeared, and now I feel like I'm a criminal by default every time I get groceries. I was optimistic about more people being online, and now we all feel like we can't get off it. I've become increasingly cynical about tech because of these and a thousand other disappointments about the utopia that was promised turning into not much more than an efficient means of extracting money from all of us.

I don't think it's ever a good idea to be totally optimistic or pessimistic about a new technology, because then we're liable to ignore anything that goes against our narrative. Really, we don't know, and we can't know what's best for ourselves, usually until it's too late.


Well said - all of this resonates with me. Based on our experience of the past couple of decades, we have to try to ensure that these new tools don't create more dystopian nightmares as well.


I believe that'll be impossible. These new tools inherently shift power away from labor and towards capital, in a world that is already heavily skewed that way. The inevitable result will be even worse lives for most people, while the very rich will be able to live in even greater excess.


Maybe not impossible, but very hard indeed. Imagine this was the start of the search engine era. Everyone jumped on Google as it was hip and they promised not to be evil. We all know how that panned out.

What we need is open source implementations which governments can take and provide for their citizens. Yes I know many will laugh at this idea, but in principle it is the right thing to do. If these services become a part of everyday life, it is the responsibility of government to make it available to their citizens in a way that is not harmful.


Funnily enough, the closest to that is China with Deepseek. Both of which are demonized by the VC class. Funny, that.


I have been thinking a lot about this parable (introduced to me by Bluey) in relation to the current surge of GenAI: https://btribble.com/well-see/

> A conviction that the future is bright isn't enough to actually make the future bright.

Well said.

We have to be curious, open, and ready to use the benefits of technological change, but also skeptical, concerned, and willing to defend social and political norms, too. Wild optimism is potentially very dangerous; hardline pessimism would leave us insufficiently ready to face the moment when we do need to act proactively.


Tailing off the pessimism part:

A lack of conviction that the future is bright is enough to make the future dark.


Well, it's at least enough to make the future dark for us. If you believe that there isn't any future worth having, and act on that belief, then your future is likely to be fairly crummy.

And, if enough people in a society believe and act that way, the future is likely to be crummier for the society as a whole, too.


I love that parable. It stands in such contrast to a world where we all feel a pressure to know what’s right and wrong, how things will go etc. The reality is none of us do.


I'm curious where you think this pressure comes from. I don't necessarily feel it myself but a lot of message board content I consume seems to react to this pressure, a pressure to have conviction in right and wrong. Is there something in the zeitgeist you feel that pushes people to have a strong stance?


I can only speak for myself, but what I discovered internally were a few mechanisms of my mind:

1. A fear of the unknown. The mind tries to construct a knowable future in order to feel safe and prepared. It doesn’t matter how accurate it is, it just chooses what’s comfortable.

2. A desire to fit in, be accepted or loved. If our beliefs are reflected by others, it means we are accepted. If the beliefs are attacked, it’s not just a challenge to a belief system but it’s an attack on me and my validity as a person.

I think both of these seem to combine and play out online as we find groups of people who have acceptable beliefs to us. For some people, total pessimism is actually the comfortable perspective (that was me for sure) and for others maybe a blind optimism makes them feel safe.

When both of these fell away I realised how distorting they had been to my world view and my ability to actually engage with changes.


> I don't think it's ever a good idea to be totally optimistic or pessimistic about a new technology, because then we're liable to ignore anything that goes against our narrative.

A healthy dose of skepticism is always necessary, that goes for anything in life. That is obviously also true for AI.

The backdrop of this post is what I believe to be a disproportionate amount of skepticism right now in my circles that is largely based on things you can already refute. There will for sure be knock-on effects from this that we cannot anticipate yet, and when we see inklings of them we will need to deal with them.

Facebook and social media are a good example of this, because we did miss the effects they had, or we dismissed them. However, if you go back to what the original skepticism of Facebook was about (mostly privacy), it mostly missed the dramatic effects social media had on the psyche of children years later.


How does self-checkout make you feel like a criminal? I didn’t follow that one.


Not OP, but the way self-checkouts are built does make one feel like they are being treated as a potential thief. There are cameras in each till recording you as you are scanning your items. The moment you scan your item, the thing keeps nagging you to put it in the basket because it wants to weigh it to make sure you are not placing anything extra. If for whatever reason you don't want to put it in the basket, it doesn't allow you to scan anything else. The list goes on.

Yes I know all about "shrinkage" but self-checkouts just feel like some smart arse built a machine which barely replaces the person at the till and sold it to supermarkets as a feature.


Oh, I see. I live in Japan where self-checkouts are extremely common and have no particular security, so I’m not familiar with those systems.


Ahh perhaps it’s because Japan is such a high trust society. One of the few places where you can typically expect your lost wallet returned with all the cash inside.

Here in Australia, several Coles supermarkets have installed exit gates that track you on camera as you walk through the self checkout area, and if the machine decides you didn’t pay, it will physically close the gate and flash red.

Woolworths also have cameras in front and above each self checkout, and when the machine inevitably decides that the weight of your reusable grocery bag is an attempt to steal something, it will replay the footage from above to the assistant.

I have actually noticed I’m hassled far less by the Aldi self checkouts, they seem to need a lot less operator intervention so I tend to shop there more nowadays.


This is the main reason why I stopped using self-checkouts entirely. I'd rather wait in line than feel like I'm a suspect. Also, there is too much risk of a false theft accusation. Better to let a store employee do the work and not expose myself to that risk.


Looool, your post says "let's be neutral", and then goes on to list how EVERYTHING has gone not neutral, but ACTIVELY BAD.

That's exactly why we need to have a pessimistic view on these kinds of tech: we've monitored them for a while now, and they've shown that THEY MOSTLY BRING BAD CHANGES (except to a select few rich white westerners, who are the main population here, so I'm expecting the downvotes, and who lack the real empathy, unlike the OP, to understand what "keeping their privileges" is doing to other vulnerable populations)


I could go on with a list of ways I think technology has been good too - the internet allowing people to connect across the world almost for free, less estrangement from other cultures, new artistic mediums and ways to share art with other people, enabling indie writers, artists, and engineers to sell their work to the whole world and make a living from it. Those are things I think most people would be happy to see more of.

I agree though that (particularly in the last decade) tech advancements have been weighted more heavily towards the exploitative than the helpful.


Well, so again we agree, that's exactly why we need pessimists indeed: as of late, most developments have been for the worse, not the better


