Hacker News: xmcqdpt2's comments

Java has Security Managers. I've never seen anyone use it in practice though, so it probably doesn't work very well.

I think it would be hard to get any kind of usable capability system without algebraic effects like those of Koka or Scala libraries.

EDIT: Apparently Security Managers are deprecated and slated for removal.


From a purely aesthetic point of view, this is what I enjoy so much about C and Fortran. You look up a Rust crate and it hasn't been touched in 5 years. That means it's unmaintained and unusable, don't add it to your project or you will have a bad time.

You find a C or Fortran library that hasn't been touched in 20 years and (sometimes) it's just because it's complete and there hasn't been any reason to update any part of it. You can just add it to your project and it will build and be usable immediately. I wish we had more of those.


Clojure and Go are similar to this from my experience.

JVM is fast for certain use cases but not for all use cases. It loads slowly, takes a while to warm up, generally needs a lot of memory and the runtime is large and idiosyncratic. You don't see lots of shared libraries, terminal applications or embedded programs written in Java, even though they are all technically possible.

You don't even need to go all V8, you could just build something like LuaJIT and get most of the way there. LuaJIT is about 10k LOC and V8 is about 3M LOC.

The real reason is that it is a deliberate choice by the CPython project to prefer extensibility and maintainability over performance. The result is that Python is a much more hackable language, with much better C interop than V8 or the JVM.


It's interesting that branching, which is a marquee feature of git, became less important at the same time as git ate all the other VCSes. Outside of open-source projects, almost all development is trunk-based with continuous releases.

Maybe branching was an important reason to adopt git, but now we'd probably be OK with a VCS that doesn't even support it.


Not sure if that's true. I mean, I do agree with the core of it, but how do you even do PRs and resolve conflicts if there are no branches and a developer can't efficiently update their code against the latest (remote) version of the master branch?

Trunk-based development has every developer in the company committing straight to main: no PRs, and supposedly no merge conflicts (though in reality main moves fast, and if two people are working in the same files, there will be merge conflicts).

A middle ground is small PRs where people constantly rebase onto the tip of main to keep conflicts to a minimum.
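That small-PR rebase flow can be sketched with plain git commands in a throwaway repo. Everything here (the `my-small-pr` branch, the file names) is hypothetical, made up purely for illustration:

```shell
# Demo: keep a short-lived PR branch current by rebasing onto a moving main.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > shared.txt && git add shared.txt && git commit -qm "base"

# Start a small PR branch and commit to it.
git switch -qc my-small-pr
echo feature > feature.txt && git add feature.txt && git commit -qm "add feature"

# Meanwhile, main moves ahead.
git switch -q main
echo more >> shared.txt && git commit -qam "main moves on"

# Keep the PR branch current: replay its commits onto the new tip of main.
git switch -q my-small-pr
git rebase -q main
git log --format=%s
```

Since the rebase rewrites the branch's history, updating a remote PR afterwards takes `git push --force-with-lease` rather than a plain push.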


Trunk-based development is still a hotly debated topic. I personally prefer branches at this point in time; trunk-based development has caused me more trouble than its claimed worth in the past. BUT that could be a me limitation rather than a limitation of the style.

To start, this is more or less an advertising piece for their product. It's pretty clear that they want to sell you Allium. And that's fine! They are allowed! But even if that was written by a human, they were compensated for it. They didn't expend lots of effort and thinking, it's their job.

More importantly, it's an article about using Claude from a company whose business is using Claude. I think on balance it's very likely that they would use Claude to write their technical blog posts.


> They didn't expend lots of effort and thinking, it's their job.

Your job doesn't require you to think or expend effort?


Then pangram isn't very good, because that article is full of Claude-isms.

> because that article is full of Claude-isms

Not sure how I feel yet about the whole "LLMs learned from human texts, so now the people who helped write human texts are suddenly accused of plagiarizing LLMs" thing, but it seems backwards so far, and like a low-quality criticism.


Real talk. You're not just making a good point -- you're questioning the dominant paradigm

Horrible

I'm sure some human writers would write:

> The specification forces this question on every path through the IMU mode-switching code. A reviewer examining BADEND would see correct, complete cleanup for every resource BADEND was designed to handle.

> The specification approaches from the other direction: starting from LGYRO and asking whether any paths fail to clear it.

> *Tests verify the code as written; a behavioural specification asks what the code is for.*

However this is a blog post about using Claude for XYZ, from an AI company whose tagline is

"AI-assisted engineering that unlocks your organization's potential"

Do you really think they spent the time required to actually write a good article by hand? My guess is that they are unlocking their own organization's potential by having Claude write the posts.


> Do you really think they spent the time required to actually write a good article by hand?

Given that I've been familiar with Juxt since before LLMs were a thing, have used plenty of their Clojure libraries in the past, and have hung out with people from Juxt, yes, I do think they could have spent the time required to both research and write articles like these. Again, I won't claim to know for sure how they wrote this specific article, but I'm familiar enough with Juxt to feel relatively confident they could have written it.

Juxt is more of a consultancy shop than an "AI company"; not sure where you got that from. I guess their landing page isn't 100% clear about what they actually do, but they're at least prominent in the Clojure ecosystem and have been for a decade if not more.


Your guess is worth what you paid for it.

Is it even possible for a tool to know with high confidence whether something is AI-written? LLMs can be tuned/instructed to write in an infinite number of styles.

I don't understand how these tools exist.


The WikiEDU project has some thoughts on this. They found Pangram good enough to detect LLM usage while teaching editors to make their first Wikipedia edits, at least enough to intervene and nudge the student. They didn't use it punitively or expect authoritative results, however. https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipe...

They found that Pangram suffers from false positives in non-prose contexts like bibliographies, outlines, formatting, etc. The article does not touch on Pangram’s false negatives.

I personally think it's an intractable problem, but I do feel Pangram gives some useful signal, albeit not reliably.


It has Claude-isms, but it doesn't feel very Claude-written to me, at least not entirely.

What's making it even more difficult to tell now is people who use AI a lot seem to be actively picking up some of its vocab and writing style quirks.


Pangram has a very low false positive rate, but not the best false negative rate: https://www.pangram.com/blog/third-party-pangram-evals

You sound like a flat earther and a moon landing denier combined.

Emacs + a bitmap font for me, and I'm continually shocked, when I have to use something else, by how blurry and laggy it is.

As is often repeated, the optimal amount of fraud is not zero

https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...

They are optimizing towards making it easy to purchase things on a whim.


As I’ve noted, I’m referring to things far beyond just purchases.

AI providers also claim to have small marginal costs. The cost of a token supposedly prices in model training, so it's not that different from, e.g., your server costs being low while your content production costs are high. And in many cases AI companies are direct competitors to the content producers (artists, musicians, etc.).

(TBH it's not clear to me that their marginal costs are low. They seem to price based on narrative.)

