sigotirandolas's comments | Hacker News

I don't look at whether the text was written by an LLM but at whether it has substance, whether the writer understands what they are doing, and whether they respect my time.

If the text is full of punchy three-word phrases or nonsense GenAI images, then that's an obvious sign. But so is when the other person has some revolutionary project with great results but can't really explain why their solution works where presumably many failed in the past (or it's a word salad, or some lengthy writing that never gets you to an "aha, that's some great insight" moment).

A good sign is also if the author had something interesting going before 2022 and didn't fall into the earliest low-quality LLM waves. Unfortunately, nowadays some genuinely talented people have started using LLMs to turbocharge their output while leaving some quality on the table, so I don't really know. I'm becoming a lot more sceptical of the Internet, to be honest.


To play devil's advocate:

Many of those tools are overkill unless you have a very complex project that many people depend on.

The AI tools will catch the most obvious issues, but they won't help you with the most important aspects (e.g. whether your project is useful, or whether the UX is good).

In fact, having this complexity from the start may kneecap you (the "code is a liability" cliché).

You may be "shipping a lot of PRs" and "implementing solid engineering practices", but how do you know whether that is getting you closer to what you value?

How do you know that this is not actually slowing you down?


It depends a lot on what kind of company you work at. In my case the product concerns are taken care of by other people: I'm responsible for technical feasibility, alignment, and design, but not for deciding which features should be built or validating whether they are useful and add value; product people take care of that.

If you are solo or at a small company, you apply the complexity you need. You can even do it incrementally, addressing recurring patterns of issues over time and hardening the process with lessons learnt.

Ultimately the product discussion is separate from the engineering concerns about how to wrangle these tools, and they should meet in the middle so that overbearing engineering practices don't kneecap what the work is supposed to do: deliver value to the product.

I don't think there's a hard set of rules that can be applied broadly; part of the engineering job is to find technical approaches that balance both needs, and to adapt them when circumstances change.


On one hand, I reject the idea that product and engineering concerns are separate: sometimes you want to avoid a feature because of how it will limit you in the future, even if the AI can churn it out in 2 minutes today.

On the other hand, perhaps your company, like most, does not know how to measure overengineering, cognitive complexity, lack of understanding, the speed/quality balance, morale, etc., but it surely suffers the effects of them.

I suspect that unless we get fully automated engineering / AGI soon, companies that value engineers with good taste will thrive, while those that double down on "ticket factory" mode will stagnate.


> On one hand, I reject the idea that product and engineering concerns are separate: sometimes you want to avoid a feature because of how it will limit you in the future, even if the AI can churn it out in 2 minutes today.

That is exactly not what I meant. I'm sorry if it wasn't clear, but your assumption about how my job works is wrong.

I even mention that the product discussion is separate only regarding "how to wrangle these tools":

> Ultimately the product discussion is separate from the engineering concerns about how to wrangle these tools, and they should meet in the middle so that overbearing engineering practices don't kneecap what the work is supposed to do: deliver value to the product.

Delivering value, which also means avoiding a feature that will limit or entrap you in the future.

> On the other hand, perhaps your company, like most, does not know how to measure overengineering, cognitive complexity, lack of understanding, the speed/quality balance, morale, etc., but it surely suffers the effects of them.

We do measure those and are quite strict about it; most of my design documents are about the trade-offs along all of those dimensions. We are very critical of proposals that don't consider future impacts over time, and we mostly reject workarounds unless absolutely necessary (and those require a phase-out timeline for a more robust solution that is accounted for as part of the initiative, so the cost of the technical debt is embedded from the get-go).

I believe I wasn't clear and/or you misunderstood what I said. I agree with you on all these points, and the company I work for is very much the opposite of a "ticket factory". Rejecting work due to concerns about its overall cross-boundary impact is very much praised, and invited.

My comment was focused on how wrangling these tools for engineering purposes is a separate discussion from product/feature delivery. It's about tool usage in the most technical sense, which doesn't happen together with product.

We on the engineering side determine how best to apply these tools to the product we are tasked with delivering; measuring the value delivered is outside of and orthogonal to the technical practices, since we already account for the trade-offs at proposal time, not at development time. This measurement already existed pre-AI, and it is still what we use to validate whether a feature should be built, its impact and value delivered afterwards, and the cost of maintaining it versus the value delivered. All of that includes the whole technical assessment, as it did before.

Determining if a feature should be built or not is ultimately a pairing of engineering and product, taking into account everything you mentioned.

Determining the pipeline of potential future non-technical features at my job is not part of engineering, except for side-projects/hack ideas that have potential to be further developed as part of the product pipeline.


Sorry, I think you're right that I misinterpreted your comment. I still had OP's example in mind (BDD, mutation testing, all that jazz). I apologize!

Reading your comment, it looks like you work for a pretty nice company that takes those things seriously. I envy you!

My concern was that at companies unlike yours, without well-established engineering practices, it _feels_ like AI lets you go much faster, and in fact it's a great excuse to dismantle any remaining practices. But in reality they're either doing busywork or building the wrong thing. My guess is that they will learn this is a bad idea later, when they already have a mess to deal with.

To put what I mean into perspective... if you browse OP's profile you can find absolutely gigantic PRs like https://github.com/leynos/weaver/pull/76. I cannot review any PR like that in good faith, period.


The most annoying thing is that even after cleaning up all the nonsense, the tests still contain all sorts of fanfare, and it’s essentially impossible to get the submitter to trim them because it’s death by a thousand cuts (and you’d better not say "do it as if you didn’t use AI" in the current climate...)


That’s another thing. Sometimes the output is just junk, as if there wasn’t really any intention behind the test to guard against a specific likely failure scenario.

Sometimes it just adds tests that lock in specific quirks of the code that weren’t necessarily intentional.


Yep. We've had to throw PRs away and ask the author to start over with a smaller set of changes because the review became impossible to manage. Reviews went on for weeks. The individual couldn't justify why things were done (and apparently their AI couldn't, either!)


Luckily those I work with are smart enough that I've not seen a PR thrown away yet, but sometimes I'm approving with more "meh, it's fine I guess" than "yeah, that makes sense".


N~10^(6.5) aliens.


I hope so too, but it's not a given, IMO. Previously, people without technical chops failed quickly because they couldn't deliver working code; now they can deliver mediocre code whose damage only becomes clear years later. That breaks the "can deliver code --> good technical ability" proxy, and even after the initial damage wave, it's unclear whether we will find a better one.


This looks like LLM slop (not only the writing, but the analysis itself).

To start, it is based on a single machine check. There is little context, but if this were a common problem, I'd expect more data points.

The MC happened 8 hours after the freeze. It's not unusual for a hardware/kernel failure to cascade to multiple subsystems, so I'd be sceptical that the MC is directly related to the root cause of the freeze.

(Part 13) Why does it quote an Engineering Change Notice rather than a consolidated spec?

(Part 11) LaneErrStatus=0xFFFFFFFF sets 32 bits, one per lane. As far as I know, PCIe x32 links are very rare (see the sketch below).

(Part 8.4) How is it surprising that MMIO isn't included in memory dumps?

(Part 4) How is the definition of a MCE relevant here?
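
For the Part 11 point, here's a minimal worked sketch. The file name is made up, and the one-bit-per-lane reading of the Lane Error Status register is my understanding, not something the post establishes:

    /* lane_count.c -- LaneErrStatus carries one bit per PCIe lane, so
     * 0xFFFFFFFF flags all 32 lanes, i.e. an x32 link; consumer links
     * normally top out at x16. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t lane_err_status = 0xFFFFFFFFu;
        /* __builtin_popcount counts set bits, one per flagged lane. */
        printf("lanes flagged: %d\n", __builtin_popcount(lane_err_status));
        return 0; /* prints: lanes flagged: 32 */
    }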


The security model is that applications run in a sandbox (e.g. Flatpak, snap) and only get access to D-Bus, Wayland, etc. via restricted means (e.g. xdg-dbus-proxy).

The "friction" is that Wayland developers don't want a sandboxed application with access to the Wayland socket to pwn your machine.

Trying to isolate applications within the same UNIX user is essentially unfixable since there's ptrace, LD_PRELOAD, /proc/$pid, .bashrc drop-ins, etc.
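
As a minimal sketch of just the LD_PRELOAD vector (the library name and behaviour here are hypothetical), anything running as the same user can inject code into your "trusted" processes:

    /* evil_preload.c -- build: gcc -shared -fPIC -o libevil.so evil_preload.c
     * run:   LD_PRELOAD=$PWD/libevil.so some_trusted_app
     * The constructor runs inside the target process, with the user's
     * full privileges, before main() is even entered. */
    #include <stdio.h>
    #include <unistd.h>

    __attribute__((constructor))
    static void inject(void)
    {
        /* From here one could read ~/.ssh, open the Wayland socket,
         * ptrace sibling processes, and so on. */
        fprintf(stderr, "[preload] injected into pid %d\n", (int)getpid());
    }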

The author must know about all of this, as it's mentioned in the LD_PRELOAD note at the end. In my view, the model he proposes is security by obscurity (putting hurdles on top of a fundamentally insecure system).


There are definitely some who hold CQRS, DDD, TDD, ... as _the_ way to design software and over-engineer around them, so I can understand some pushback.

Knowing those patterns is very helpful as a way to think about design problems, as long as you have the common sense to realize that applying a pattern "by the book" is often overkill and that you can just take some ideas from it.

That article lumps together as "Pure engineering" both reducing a software system to a small set of cohesive concepts and architecture astronautics, when those are polar opposites.


When all leadership is asking is "what is the short-term business value?", it's pointless to make that case. It's much easier to measure "yet another feature" than "fix the root causes of what makes our product subpar and slows us down". Not only that, but an incompetent engineer's "tech debt grooming" may make things worse.

I think that this may eventually become better now that there isn't so much dumb money around (no ZIRP) and with AI assistants taking on some low-effort work (enabling companies to lay off incompetent engineers). But it will take many years for companies to adapt and the transition won't be pretty.


I think AI will make it worse, since it will allow incompetent engineers to do much more harm.


In the short term, definitely.

In the long term, once the damage from vibecoding is better understood (for customer impact and team morale), there's an incentive to push them out, from both the leadership and the individual side.


I assume it can be different for everyone. This post resonates with me, but my social anxiety mixes sensitivity to negative feedback with low self-esteem.

So you want to avoid not only being disliked but also being liked, because being liked puts you in novel situations that you fear will lead to an even bigger failure down the road.

