Hacker News | TimTheTinker's comments

A lot of those old calculators can be had on eBay for less than $30 shipped. I've been building up a small collection: the 73, 81, 82, 83, 85, 86, and 89.


I am enjoying my 92+


I have an 89 but have never owned a 92. Do you think it's worth it to mess around with Fargo and the various games?


I used Webmin[0] back in the day, I wonder how more recent server web UIs like Cockpit stack up.

[0] https://webmin.com/


I looked at this and said "They've made Webmin again."


With all the potential vulnerabilities of 400 npm dependencies (I'm guessing; it's probably higher).


We don't need security. We already know your age and who you are. /s


It's neutered and not as full-featured, but not bad in a pinch. All of these web admin tools are hacks that call out to shell scripts and whatnot. That requires a lot of conditional behavior and/or vertical integration. "Linux" has no consistent API for control, so it's all duct tape. Webmin is the same, tbh (swap Perl for whatever Cockpit is written in).

45Drives uses Cockpit as the UI layer of their "Houston" operating system. https://www.45drives.com/community/articles/New-Operating-Sy...
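A minimal Python sketch of the "duct tape" pattern described above (names and commands are hypothetical, for illustration only): a web-admin backend that just shells out, since there is no unified Linux control API.

```python
import subprocess

# Hypothetical handler in the style the comment describes: the web UI's
# backend simply shells out to a CLI tool. A real tool must validate
# `argv` strictly, since passing user input to commands invites RCEs.
def run_admin_command(argv: list[str]) -> tuple[int, str]:
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.returncode, result.stdout

# e.g. a "restart service" button might boil down to:
# run_admin_command(["systemctl", "restart", "nginx"])
```

Tools like Cockpit reduce (but don't eliminate) this by talking to D-Bus APIs instead of ad-hoc shell calls.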


Cockpit tends to be less ad hoc than others, in my experience. Often it'll use D-Bus on the backend.

It's also socket activated, which is nice.


> All of these web admin tools are hacks that call out to shell scripts and whatnot

I love the smell of RCEs in the morning.


Yes, agreed. Though speed limits higher than 75 are not something I will ever support.*

* Unless we're talking about removing a speed limit altogether and regulating unsafe driving using other criteria.


Autonomous vehicles following proper signalling before lane changes can be safe at arbitrary speeds (see: Autobahns working at all). For humans, we should limit the passing-speed delta between adjacent lanes to roughly 5 mph and leave it at that.

Humans with adequate following distance across the entire lane can probably manage a 10 mph delta. I routinely travel dozens of miles very safely at ~80 with the flow of traffic (including the cops), and I've been stressed out at 55 in the carpool lane through stop-and-go traffic in the right-hand lanes due to on-ramps/off-ramps.


What happens at 76mph?


Same thing that happens at 77mph :)

I think 75 is memorable and roughly in the region where the tradeoff between increased kinetic energy and decreased time to arrival per additional unit of velocity becomes untenable.
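Back-of-the-envelope numbers for that tradeoff (illustrative only): crash energy scales with v², while trip time over a fixed distance scales only with 1/v.

```python
# Illustrative arithmetic for the speed-limit tradeoff: kinetic energy grows
# quadratically with speed, but the marginal time savings keep shrinking.
def kinetic_energy_ratio(v1: float, v2: float) -> float:
    # KE = 1/2 m v^2, so for the same vehicle the ratio is (v2/v1)^2.
    return (v2 / v1) ** 2

def time_saved_pct(v1: float, v2: float) -> float:
    # Trip time over a fixed distance is d/v.
    return (1 - v1 / v2) * 100

# Going from 75 to 90 mph: ~44% more crash energy for ~17% less travel time.
```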


> the tradeoff between increased kinetic energy and decreased time to arrival per additional unit of velocity becomes untenable

Sounds like a warning page out of the back of a '94 Geo Metro owner's manual.


Framing LLM use for dev tasks as "narrative" is powerful.

If you want specific, empirical, targeted advice or work from an LLM, you have to frame the conversation correctly. "You are a tenured Computer Science professor agent being consulted on a data structure problem" goes a very long way.

Similarly, context window length and prior progress exert significant pressure on how an LLM frames its work. At some point (often around 200k-400k tokens in), they seem to reach a "we're in the conclusion of this narrative" point and will sometimes do crazy stuff to reach whatever real or perceived goal there is.
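The framing advice above can be sketched as a chat-style message list (this uses the common chat-completions message shape, not any specific library; the wording is hypothetical):

```python
# Hypothetical "role framing" for a chat-style LLM API. The system message
# sets the narrative frame before the actual question is asked.
messages = [
    {
        "role": "system",
        "content": (
            "You are a tenured Computer Science professor being consulted "
            "on a data structure problem. Answer precisely and cite "
            "complexity bounds."
        ),
    },
    {
        "role": "user",
        "content": "Which structure gives O(log n) insert and O(1) min lookup?",
    },
]
```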


Presumably the idea is that an agent generates a Lean4 specification against which the software is measured.

But then the Lean4 specification effectively becomes the software artifact.

And we're sort of back to square 1. How do you verify a Lean4 spec is correct (and that it describes what needs to be built in the first place) without human review?


> And we're sort of back to square 1.

Specifications are smaller than the full code, just as high level code is smaller than the functionally equivalent assembly. As we ascend the abstraction ladder the amount of reading a human needs to do decreases. I don't think this should really count as "back to square 1".
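As a toy illustration of that size asymmetry (a hypothetical example, not from any of the projects discussed): a Lean 4 spec can be a single proposition, while the implementation plus its correctness proof grows much larger.

```lean
-- Hypothetical one-line "spec" for an absolute-value function: reviewing it
-- is far less work than reviewing an implementation plus its proof.
def AbsSpec (f : Int → Int) : Prop :=
  ∀ x : Int, 0 ≤ f x ∧ (f x = x ∨ f x = -x)

def myAbs (x : Int) : Int :=
  if 0 ≤ x then x else -x

-- A verified implementation would ship with `theorem myAbs_ok : AbsSpec myAbs`,
-- but a human still has to read `AbsSpec` to confirm it captures the intent.
```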


That has always been the perceived promise of higher-abstraction software specs: automated code generation from something higher-level, thus making programmers increasingly obsolete.

  binary => hexadecimal instructions
  hexadecimal => assembly language
  assembly => portable, "high-level" languages (C, FORTRAN, COBOL, etc.)
  HLLs => 3GLs (BASIC, C++, Pascal, Java, C#, JavaScript, etc.)
  3GLs => 4GLs/DSLs/RADs and "low-code/no-code"[0]
Among the RADs is Microsoft Visual Basic, which along with WinForms and SQL was supposed to make business programmers nearly obsolete, but instead became a new onramp into programming.

In particular, I'd like to highlight UML, which was supposed to mostly obsolete programming through auto-generated code from object-oriented class diagrams.[1] The promise was that "business domain experts" could model their domain via visual UML tooling, and the codegen would handle it from there. In practice, UML-built applications became maintenance nightmares.

In every one of these examples, the artifact that people made "instead of programming" became the de-facto programming language, needing to be maintained over time, abstracted, updated, consumed behind APIs, etc. -- and programmers had to be called in to manage the mess.

It's interesting that Spec4 can be auto-generated, then used to generate code. My question is: what do you do when you have (a) consumers depending on a stable API, and (b) requests for new features? Maybe hand the job to Claude Code or a human developer with a suite of unit tests to guarantee API compatibility, but at that point we're back to an agent (LLM or human) doing the work of programming, with the Spec4 code as the programming language being updated and maintained.

[0] https://en.wikipedia.org/wiki/Fourth-generation_programming_...

[1] https://news.ycombinator.com/item?id=26934795


Does that mean your production code is Lean? Or do you translate code in some other language to Lean to verify it?


Also a very good question, btw. People do both. For some projects, Lean is expressive and performant enough to use on its own (or call into via the reverse FFI); other projects use a model of a real programming language like Rust. The disadvantage of the latter is that the Lean model of Rust has to be trusted.


Do you know if there are some resources or examples of this? Especially actual production stuff, not just side projects or proof of concepts?


Cedar (https://lean-lang.org/use-cases/cedar/) at AWS uses an executable Lean model of a system as an oracle for differential testing of a Rust implementation. If you can't run Lean in production, then their approach is compelling.

The idea is that you start with a Lean specification, create a fully verified implementation with respect to the spec, and then hook it and the production implementation up to a fuzzer or a source of random inputs. Then you can explore a lot of the state space fully automatically.
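A minimal Python sketch of that differential-testing loop (the two abs functions stand in for the verified oracle and the production implementation; all names are made up):

```python
import random

# Stand-in for an executable verified model used as the oracle.
def oracle_abs(x: int) -> int:
    return x if x >= 0 else -x

# Stand-in for the production implementation under test.
def prod_abs(x: int) -> int:
    return abs(x)

def differential_test(trials: int = 10_000, seed: int = 0) -> None:
    # Feed both implementations the same random inputs; any disagreement
    # points at a bug in one of them (or a gap in the spec).
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-10**9, 10**9)
        assert oracle_abs(x) == prod_abs(x), f"mismatch on input {x}"

differential_test()
```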


In addition to Cedar:

[1] SymCrypt (MSR). Verified cryptographic primitives. It's in the latter style, using the Aeneas model of Rust.

[2] KLR (AWS). ML compiler. Not verified, but it's in the former style where they use pure Lean functions and interface with C code across the FFI.

[3] SampCert (AWS). Verified random sampling algorithms for differential privacy. Uses pure Lean functions and is called into via the reverse FFI.

Full disclosure: I worked on 2 and 3, haha. There's also some stuff being used by cryptocurrency people, but I don't follow that very closely.

[1] https://www.microsoft.com/en-us/research/blog/rewriting-symc...

[2] https://github.com/leanprover/KLR

[3] https://github.com/leanprover/SampCert


A lot of people habitually make bets out of addiction without thinking carefully about the expected payoff.


Anything that requires a ton of criminal law, regulation, and enforcement around it should have to meet some kind of standard of societal benefit.

The entertainment value of betting does not meet that standard, in my opinion.


I don't know: if you upload a package.json with any dependencies that map to real npmjs.com packages, it does lead you to a Stripe payment page that appears to be real... and it looks like you'd be sending real money.

Maybe that's part of the joke, though :)


It's wild to watch documentaries from the 1980s where a primitive computer is said to be "a thinking machine" that is "taking most of the work out of a job".


Format.js polyfills a lot of stuff that's already supported in modern browsers, and seems to prefer its own polyfills to the native APIs.

Is there any way to force it to only use (and bundle) the polyfills that are needed assuming 2025+ era browsers?

