
I don't think GP is talking about average users; they seem to be talking about decision-makers in organizations, e.g., a town board that wants to achieve digital independence, but is made unsure by apparent turmoil in the governance of open-source orgs...

YUP

He is wrong about almost everything, and especially about introspection.

But he got lucky and wrote a good-enough-for-the-time browser at just the right time.

Now, he mistakes his luck and his F_U_Money for skill and intelligence. And why wouldn't he? He can simply walk away from any situation that makes it seem he is wrong.

And the broader problem in society is that nearly the entire populace has been conditioned to ignore the factor of luck and to mistake monetary success for hard work and wisdom, when in fact those people are often no more than massively amplified fools.

The massive follies of most of these current robber barons make the case for taxing them out of existence. Once someone has enough money that they and their family cannot spend it in multiple lifetimes of excessive luxury, the only reason to have more is power. We should ramp up tax rates so those people cannot accumulate that power.

Power corrupts; absolute power corrupts absolutely. A society that fails to manage that fact of human nature dooms itself.


Not at all necessarily. It could be, but it is definitely not necessarily true.

It depends entirely on usage patterns and attitudes.

I just used a piece of material that had been sitting on the shelf for at least ten and probably closer to fifteen years. I'd purchased it as an off-cut from a supplier on the thought of "this might be useful, and it's a good deal". Carried it through two moves and never got around to using it. Suddenly it turned out to be the perfect thickness for this one customer project when the expected material didn't work — never could have predicted it. Not only that, but when I went to check how much it would cost today, I could not find that particular thickness anywhere. It literally saved my butt on this job.

OTOH, I do have other materials I've used once or twice, but the needs have shifted so they're going on Craigslist for free.


I once read a funny comment about a young guy told by his parents to consider what everything costs vs its value if he invested it and let it compound.

A 25-year-old buys a knife set he didn't really need because it's on sale at 50% off.

If he had invested that $49 instead, it would eventually be worth $200. So the purchase costs him $200 by the time he's old, plus 40 years of clutter.
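(For what it's worth, the implied return in that joke is modest. Assuming annual compounding at roughly 3.6%/yr: 49 × 1.036^40 ≈ 49 × 4.1 ≈ $200. At a stock-market-ish 7%/yr, it would be closer to 49 × 15 ≈ $735.)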


Even more wild to read that the sarcasm about "removing locks from doors for an 87% speedup" is considered extreme...

And yes, we agree that running unconstrained AI agents with --dangerous-skip-confirm flags and seeing nothing wrong with it is insane. Kind of like advertising for burglars to come open your doors for you before you get home - yeah, it's a lot faster to get in (and to move about the house, with all your stuff gone).


Unsurprising that people complain.

"Thinking is the hardest work there is, which is why so few people do it" — attrib Henry Ford

Now we have tools that can appear to automate your thinking for you. (They don't really think, but they do appear to, so...)


“Thinking is to humans as swimming is to cats. They can do it, but they prefer not to.” - Kahneman

Oh Gawd, not this idea again!

This idea of capturing the timing of people's keystrokes to identify them, ensure it is them typing their passwords, or even using the timing itself as a password has been recurring every few years for at least three decades.

It is always just as bad, because there are so many cases where it completely fails.

The first case is a minor injury to either hand — just put a fat bandage on one finger from a minor kitchen accident, and you'll be typing completely differently for a few days.

Or: I just walked into my office eating a juicy apple with one hand, and I'm hurriedly typing my PW with the other because someone just called with an urgent issue I've got to fix, aaaaannnd your software balks because I'm typing with a completely different cadence.

The list of valid reasons for failure is endless: a person's usual pattern is solid 90%+ of the time, but will hard-fail the other 10% of the time, while the acceptable error rate would be 2-4 orders of magnitude lower.
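To make that failure mode concrete, here is a minimal sketch of how this class of matcher typically works: compare the gaps between keystrokes to an enrolled template and reject if the average deviation is too large. All timings and the threshold are made-up numbers for illustration, not anything from the actual product:

    #include <math.h>
    #include <stdio.h>

    /* Mean absolute deviation between two inter-key timing vectors (ms). */
    static double cadence_distance(const double *enrolled,
                                   const double *sample, int n) {
        double total = 0.0;
        for (int i = 0; i < n; i++)
            total += fabs(enrolled[i] - sample[i]);
        return total / n;
    }

    int main(void) {
        /* Enrolled two-handed cadence for an 8-key password (7 gaps). */
        double enrolled[]   = { 110, 95, 130, 100, 90, 120, 105 };
        /* Same password typed one-handed (apple in the other hand). */
        double one_handed[] = { 210, 260, 240, 300, 220, 280, 250 };
        double threshold = 40.0;  /* hypothetical accept/reject cutoff, ms */

        double d = cadence_distance(enrolled, one_handed, 7);
        printf("deviation %.1f ms -> %s\n", d,
               d < threshold ? "accepted" : "REJECTED (false reject)");
        return 0;
    }

The legitimate user gets rejected because the signal being measured is exactly the thing everyday life perturbs; loosening the threshold doesn't fix that, it only trades false rejects for false accepts.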

It's a mystery how people go all the way to building software on an idea that seems good but is actually bad, without thinking it through, or even checking how often it has been tried before and failed.


That's not what this is. At all.

You might want to check out “How it Works” on the site as none of what you said applies: https://typed.by/how

Yes. This is from that page:

>>While you type, the keyboard quietly records how you type — the rhythm, the pauses between keys, where your finger lands, how hard you press.

>>Nobody types the same way. Your pattern is as unique as your handwriting. That's the signal.

This very precisely makes my point:

Yes, the typing pattern of any human is highly and possibly even completely unique to that human — UNTIL any of a myriad of everyday issues makes it falsely deny access because the human's typing pattern has changed in a way the human can't do anything to fix at the moment.

If you are only attempting to distinguish a human from an automated system, it'll do better, until someone just starts recording the same patterns and re-playing them to this upstream process; then it's a mere race over who can get their hooks in at a lower level. And someone is always going to say: "Oh, this system can identify the specific human", and we're off to the races again.

So, no. Unless you can account for ALL of the reasonable everyday failure modes (typing with either hand; any finger or combination of fingers out of commission for a minute or a lifetime), this idea will fail.


IOW, if you are doing this, it does not matter what you are doing afterwards.

You are assuming that a human's particular typing pattern is consistent, when the fact is that any number of ordinary events will render your assumption false (one or more fingers bandaged, sprained, whatever, or one hand occupied at the moment).

This is not a hardware or software problem, and no amount of code, hardware, or cleverness will fix it; this is a fundamental mismatch between your assumption vs reality.


Then why does your link claim the following?

> While you type, the keyboard quietly records how you type — the rhythm, the pauses between keys, where your finger lands, how hard you press.

> Nobody types the same way. Your pattern is as unique as your handwriting. That's the signal.


I’m sceptical about this idea but, to give it full credit, it’s a custom piece of hardware that would presumably be more accurate than previous software-only attempts. Maybe it will actually work this time, idk, although I still don’t really see the point.

Vibe copy is a hell of a drug.

can confirm. am weird enough to routinely flag as "inhuman".

thaaaaaaaaanks


True, and this sounds very cool.

But might it just be easier to develop and apply similar sugar adhesives, or other compatible or soluble adhesives (in quantities that will not affect the recycling process)?

OFC, if you never introduce anything new, it is easier to feel like it is a "pure" process. Yet what's to say the heat treatment isn't actually creating new molecules that could be recycling-incompatible, even though it never "adds" any new material?


Or we could use the sugar-based adhesives that people researched half a millennium ago.

>>I'm always confused as hell how little insight we have into memory consumption.

>>I look at memory profiles of normal apps and often think "what is burning that memory".

Because companies starting with Microsoft approach it as an infinite resource, and have done so literally for generations of programmers — it is now ancient tradition.

Back in the x86 days, when both memory and memory handles were constrained (64k of them, iirc), I went to a MS developer conference. One problem starting to plague everyone was users' computers running out of memory when actual memory in use was less than half; the problem was not that memory was exhausted, but that all available handles were consumed.

I randomly ended up talking to the then-leader of the Excel team, so I thought I'd ask him about good practices: "Does it make sense to have the software look at the task, estimate the full amount of RAM required, allocate it off one handle, and track our usage ourselves within that block?" I was speechless when he answered: "Sure, if you wanted to optimize the snot out of it — we just allocate another handle."

That two-line answer just blew my mind and instantly explained so much about problems I saw at the time, and since.
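For anyone who hasn't seen the sub-allocation idea in practice, here is a minimal sketch of the contrast. The names and sizes are hypothetical, and plain malloc stands in for the Win16 handle-based allocators; this illustrates the technique I was asking about, not Excel's actual code:

    #include <stdlib.h>
    #include <stddef.h>

    /* The "just allocate another handle" style: every little object
       costs one allocation, and back then, one scarce handle. */
    void *naive_alloc(size_t n) { return malloc(n); }

    /* The sub-allocation idea: grab one big block up front (one
       handle), then hand out pieces of it with a bump pointer. */
    typedef struct {
        char  *base;   /* the single big block */
        size_t used;   /* bytes handed out so far */
        size_t cap;    /* total size of the block */
    } Arena;

    int arena_init(Arena *a, size_t cap) {
        a->base = malloc(cap);   /* the ONE underlying allocation */
        a->used = 0;
        a->cap  = cap;
        return a->base != NULL;
    }

    void *arena_alloc(Arena *a, size_t n) {
        /* real versions also round `used` up for alignment */
        if (a->used + n > a->cap) return NULL;  /* estimate too small */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    void arena_free_all(Arena *a) { free(a->base); a->base = NULL; }

One handle for the whole task instead of thousands; the trade-off is you must estimate capacity up front and manage lifetimes yourself, which is exactly the "optimize the snot out of it" work being shrugged off.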

It also made sense in the context of another talk they gave at a previous conference, where the message was that they anticipate the increased power of the next generation of hardware and write their new version for that hardware, not the then-current hardware. It makes sense, but in the new light it seems almost like a cousin of planned obsolescence: "How can we squander all the new power Intel is giving us?" And the result is that, decades after word processors and spreadsheets had usable performance on 640K DOS machines, new machines with orders of magnitude more power and RAM actually run slower from a user's perspective.

I'm hoping this memory crunch (having postponed a memory upgrade for my daily driver, I now notice it is 10x the price) will at least have the benefit of driving developers to get back some of the craft of designing in optimization.


Software engineers seem to be more and more abstracted from the hardware they use. Back in the day you also had to worry about things like IRQs and I/O ports and shaving off tiny amounts of latency; that's rarely a concern now.

Personally I am fine with programmers not spending tons of time optimising every last piece, because we do have so much more RAM and compute relative to the old days. My bigger issue is that things are a laggy mess even when there are plenty of resources available. I understand these things go hand in hand, but I would much rather see optimisations for the things users will actually notice than chasing metrics. A nice combo of the two would be ideal.

That being said, what's probably most appalling is how often some modern programs hard-crash even when they have plenty of resources.


Isn't this conversation, not the publishing of scientific hypotheses, theories, and findings?

If so, it is customarily permissible to use rhetoric and sarcasm to more strongly emphasize a point. Or, to leave the conclusion as an exercise for the reader.


By intentionally hiding their position (and simultaneously acting as though it is completely obvious) the OP shuts down any useful conversation that might follow. Do they think Meta will sell the user's data? Do they think different people are in charge of different policies at Meta leading to actions that appear to be in conflict with each other? Do they think they will use this information to train AI models? Do they think they will use this information to serve Ads?

There are many interesting ways that the conversation could have been carried forward, but there is no way to continue the conversation as the OP doesn't make it clear what they think.

The only thing I can say is: No I cannot figure it out, please tell me what you're trying to say here.


> The only thing I can say is: No I cannot figure it out

On the contrary, looks like you can:

> (…) sell the user's data (…) use this information to train AI models (…) use this information to serve Ads


What’s the point in providing a rebuttal to these points (e.g. that Meta doesn’t actually sell data to anyone) if the OP can simply say “that’s not what I meant”?

They are taking a position that cannot be argued against or even discussed because they don’t make that position clear.


You are the only one arguing here. Not every conversation is an invitation to argument.

> providing a rebuttal to these points (e.g. that Meta doesn’t actually sell data to anyone)

So one of your suggestions of what the OP could mean was something you explicitly don’t think is true and would argue against? That sounds like a bad-faith straw-man setup.

Perhaps it’s just as well that the OP didn’t provide one specific reason to be nitpicked ad nauseam by an army of “well ackshually” replies missing the forest for the trees.

You could, as the HN guidelines suggest, argue in good faith and steel man. The distinction between “selling your data” and “profiting from your data” isn’t important for a high level discussion.

Can you truly not see through Meta’s intentions? There are entire published books, investigations, and whistleblowers to reference. Zuckerberg called people “dumb fucks” for trusting him with their data and has time and again proven to be a hypocrite who doesn’t care about anyone but himself.


Or, OP is not hiding their position and shutting down conversation — they are not imposing their position and are opening it up to discussion.

What prevents you from saying "Yes, and Xyz!!" and another poster "Yup, and Pdq, and Foo too!"

Or, maybe OP is just being a bit lazy, but again, it seems the context is conversation, not formal scientific inquiry where everything must be falsifiable?


I think they meant that Meta is offloading the cost (fines) of farming minors' data onto the operating systems. With an up-front cost of 2 billion dollars in lobbying, they can avoid paying $300M+ fines regularly.

Indeed!!

If you consider how the reading, audio, and video you consume either builds or degrades your capabilities and character, just as the food or poison you consume either builds or degrades your physical health, then watching [the US top videos on YouTube on any given day] literally IS taking poison for your mind.

Depending on the poison and the dosage, eating the poison for your body instead may be the lesser of the two evils.


Weird. No activity or response to an obscure post beyond a couple of upvotes. Then, the next day, a brigade of no-engagement downvotes. IDC, but it seems like some corporate image management trying to hide negative takes on Google properties? Sheesh
