Hacker News | ksec's comments

If we think of Go as a different kind of C, then having Go as a compilation target makes sense, since C is a common compilation target.

I don't get your reasoning.

Yes. I am bookmarking it for future reference. The post is a perfect example of why you need a marketing and PR agency.

The situation is incredibly complex, and explaining it in full would need a book. The blog post is clear enough for the people who have followed LibreOffice as a project, while other people will have to do some research to understand all the details.

Just a copy editor would suffice.

>Next: the runtime itself. Bun has a bun build --compile flag that produces a single self-contained executable. No runtime, no node_modules, no source files needed in the container.

I didn't know that. So Bun is basically a whole runtime + framework all in one with little to no deployment headaches?


bun build creates a large self-contained executable with no optimisations, almost like a large Electron build.

Deno also provides the same functionality, but with a smaller optimized binary.

I appreciate Bun helping to create healthy competition. I feel like Deno often flies under most people's radar. More security options, faster than Node, built on web standards.


Deno's security options are very useful for AI sandboxes. Broader than node's permissions. Bun badly needs the same.
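For anyone curious what those options look like in practice, a rough sketch using Deno's permission flags (the script name, host, and directory here are hypothetical):

```shell
# Everything is denied by default; grant only what the sandboxed script needs.
# --allow-net restricts network access to a single host,
# --allow-read limits filesystem reads to one directory,
# --deny-env blocks environment variable access outright.
deno run --allow-net=api.example.com --allow-read=./workspace --deny-env agent.ts
```

Anything the script tries outside those grants fails with a permission error instead of silently succeeding, which is exactly what you want for an AI sandbox.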

There's a PR for Bun that gives the same security but it's been sitting for months https://github.com/oven-sh/bun/pull/25911

I want to migrate an existing project to Bun but cannot until it has a security permission system in place.


I was curious:

  $ cat app.ts
  console.log("Hello, world!");
  $ cat build
  #!/usr/bin/env bash
  
  bun build --compile --outfile bun-darwin-arm64         --target bun-darwin-arm64         app.ts
  bun build --compile --outfile bun-darwin-x64           --target bun-darwin-x64           app.ts
  bun build --compile --outfile bun-darwin-x64-baseline  --target bun-darwin-x64-baseline  app.ts
  bun build --compile --outfile bun-linux-arm64          --target bun-linux-arm64          app.ts
  bun build --compile --outfile bun-linux-arm64-musl     --target bun-linux-arm64-musl     app.ts
  bun build --compile --outfile bun-linux-x64            --target bun-linux-x64            app.ts
  bun build --compile --outfile bun-linux-x64-baseline   --target bun-linux-x64-baseline   app.ts
  bun build --compile --outfile bun-linux-x64-modern     --target bun-linux-x64-modern     app.ts
  bun build --compile --outfile bun-linux-x64-musl       --target bun-linux-x64-musl       app.ts
  bun build --compile --outfile bun-windows-arm64        --target bun-windows-arm64        app.ts
  bun build --compile --outfile bun-windows-x64          --target bun-windows-x64          app.ts
  bun build --compile --outfile bun-windows-x64-baseline --target bun-windows-x64-baseline app.ts
  bun build --compile --outfile bun-windows-x64-modern   --target bun-windows-x64-modern   app.ts
  
  deno compile --output deno-x86_64-pc-windows-msvc    --target x86_64-pc-windows-msvc    app.ts
  deno compile --output deno-x86_64-apple-darwin       --target x86_64-apple-darwin       app.ts
  deno compile --output deno-aarch64-apple-darwin      --target aarch64-apple-darwin      app.ts
  deno compile --output deno-x86_64-unknown-linux-gnu  --target x86_64-unknown-linux-gnu  app.ts
  deno compile --output deno-aarch64-unknown-linux-gnu --target aarch64-unknown-linux-gnu app.ts
  $ ls -1hs
  total 1.6G
  4.0K app.ts
  4.0K build
   59M bun-darwin-arm64
   64M bun-darwin-x64
   64M bun-darwin-x64-baseline
   95M bun-linux-arm64
   89M bun-linux-arm64-musl
   95M bun-linux-x64
   94M bun-linux-x64-baseline
   95M bun-linux-x64-modern
   90M bun-linux-x64-musl
  107M bun-windows-arm64.exe
  110M bun-windows-x64-baseline.exe
  111M bun-windows-x64.exe
  111M bun-windows-x64-modern.exe
   77M deno-aarch64-apple-darwin
   87M deno-aarch64-unknown-linux-gnu
   84M deno-x86_64-apple-darwin
   92M deno-x86_64-pc-windows-msvc.exe
   93M deno-x86_64-unknown-linux-gnu
  $
Maybe I'm missing some flags? Bun's docs say --compile implies --production. I don't see anything in Deno's docs.

Where? Bun's doc site search doesn't show it, but there's an open PR on the topic.

https://github.com/oven-sh/bun/issues/26373

The doc site says --production sets --minify, process.env.NODE_ENV = "production", and the production-mode JSX import & transform.

Might try:

   bun build --compile --production --bytecode --outfile myapp app.ts

D'oh, it wasn't the doc site. I was lazy:

  $ bun build --help | grep Implies
      --compile                             Generate a standalone Bun executable containing your bundled code. Implies --production
  $
I actually did double-check it, because it used to be wrong. For good measure:

  $ grep bun build
  bun build --bytecode --compile --outfile bun-darwin-arm64         --production --target bun-darwin-arm64         app.ts
  bun build --bytecode --compile --outfile bun-darwin-x64           --production --target bun-darwin-x64           app.ts
  bun build --bytecode --compile --outfile bun-darwin-x64-baseline  --production --target bun-darwin-x64-baseline  app.ts
  bun build --bytecode --compile --outfile bun-linux-arm64          --production --target bun-linux-arm64          app.ts
  bun build --bytecode --compile --outfile bun-linux-arm64-musl     --production --target bun-linux-arm64-musl     app.ts
  bun build --bytecode --compile --outfile bun-linux-x64            --production --target bun-linux-x64            app.ts
  bun build --bytecode --compile --outfile bun-linux-x64-baseline   --production --target bun-linux-x64-baseline   app.ts
  bun build --bytecode --compile --outfile bun-linux-x64-modern     --production --target bun-linux-x64-modern     app.ts
  bun build --bytecode --compile --outfile bun-linux-x64-musl       --production --target bun-linux-x64-musl       app.ts
  bun build --bytecode --compile --outfile bun-windows-arm64        --production --target bun-windows-arm64        app.ts
  bun build --bytecode --compile --outfile bun-windows-x64          --production --target bun-windows-x64          app.ts
  bun build --bytecode --compile --outfile bun-windows-x64-baseline --production --target bun-windows-x64-baseline app.ts
  bun build --bytecode --compile --outfile bun-windows-x64-modern   --production --target bun-windows-x64-modern   app.ts
  $ ls -1hs bun*
   59M bun-darwin-arm64
   64M bun-darwin-x64
   64M bun-darwin-x64-baseline
   95M bun-linux-arm64
   89M bun-linux-arm64-musl
   95M bun-linux-x64
   94M bun-linux-x64-baseline
   95M bun-linux-x64-modern
   90M bun-linux-x64-musl
  107M bun-windows-arm64.exe
  110M bun-windows-x64-baseline.exe
  111M bun-windows-x64.exe
  111M bun-windows-x64-modern.exe
  $

Ideally we would still only use JavaScript in the browser. Personally I don't care about the healthy competition, rather that npm actually works when I am stuck writing server-side code I didn't ask for.

FE-BE standardization is efficient in terms of labor and code portability, but I really like the idea of static compilation and optimization of the BE in production. There's absolutely no need or reason for the BE to do anything dynamic in prod, as long as it retains profiling inspectability when things go wrong.

That doesn't align with my experience. It feels more like a Trojan horse. Client and server rarely share code (and rarely should), and people who are really good at one discipline aren't that good at the other. Maybe LLMs will change that.

That's a negative FUD way to judge it.

Circa 2015, one of my friends was a Django dev but moved to Express/Node because that's where the cool kids went, it was one less language, and it let them move logic FE->BE and BE->FE much more easily. A bunch of Rails people also left for Node/FE JS and Rust (BE). JS/TS is still an irreducible requirement for the FE. There is no law that grand unified frameworks must be used, nor that entirely separate FE and BE stacks must be maintained as somehow mysterious, arcane arts. Not sharing code when and where it is possible and appropriate is duplicating effort, like client- and server-side input validations doing exactly the same thing.


Except we have moved beyond that with SaaS products, agents, and AI workflows.

The only reasons I touch JavaScript on the backend instead of .NET, Java, Go, Rust, OCaml, Haskell, ... are SDKs and customers that don't allow any option other than JavaScript all over the place.

Thus my point of view is that I couldn't care less about competition between JavaScript runtimes.


This (single executable) is available in node.js now too as SEA mode.

But I think it still doesn't work with ESM, only CommonJS, so while not insurmountable, it's not as good as Bun.

SEA with node.js "works" for nearly arbitrary node code -- pretty much anything you can run with node. However, you may have to put in substantial extra effort, e.g., using [1], and possibly more work (e.g., copying assets out or using a virtual file system).

[1] https://www.npmjs.com/package/@vercel/ncc
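For comparison, the basic SEA workflow from Node's docs looks roughly like this (app.js and myapp are placeholder names; on macOS and Windows you additionally have to re-sign the modified binary):

```shell
# 1. Describe the entry point (CommonJS) and the blob to generate.
echo '{ "main": "app.js", "output": "sea-prep.blob" }' > sea-config.json

# 2. Generate the injectable blob.
node --experimental-sea-config sea-config.json

# 3. Copy the node binary itself, then inject the blob into the copy.
cp "$(command -v node)" myapp
npx postject myapp NODE_SEA_BLOB sea-prep.blob \
    --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
```

So unlike bun build --compile, it's a multi-step copy-and-inject process rather than a single command, which is where tools like ncc come in for bundling the code down to one CommonJS file first.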


>I have 25 Mbps up. 10 Mbps down. Have had it for years. It's fine.

Do you mean the other way around, 25Mbps Down and 10 Mbps up?

It is nice to have, especially when it doesn't cost much. That is why I am perfectly OK with PON rather than dedicated fibre. You only need the 1 or 10Gbps speed for maybe a 10-minute window per month.

I do think 25Mbps on a household basis is quite low. On a 5Mbps video file I want the first 10-second buffer; 50Mbps gets it done almost instantly, while I am loading multiple pages in the background. Multiply that by a few more users in the family. It is perfectly usable, as you said, if you don't mind waiting.
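The back-of-envelope math behind the buffering claim, using the figures above (just an illustration):

```python
# A 10-second buffer of a 5 Mbps stream is 50 megabits of data.
# How long does it take to fill at different link speeds?
video_bitrate_mbps = 5
buffer_seconds = 10
buffer_megabits = video_bitrate_mbps * buffer_seconds  # 50 megabits

for link_mbps in (25, 50, 100):
    seconds = buffer_megabits / link_mbps
    print(f"{link_mbps:>3} Mbps link: {seconds:.1f} s to fill the buffer")
```

At 25 Mbps that's 2 seconds for one stream alone; add background page loads and a few more users and the waiting becomes noticeable.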

Otherwise I think 50 - 100Mbps per person is generally the point where we see diminishing returns.


Yes, I reversed the up/down bandwidths as you noticed, but didn't see the mistake until I could no longer edit.

> Otherwise I think 50 - 100Mbps per person is generally the point where we see diminishing returns.

Right. Whether we think the diminishing returns are at 10 or 20 or 50 or 100 Mbps per user, there are diminishing returns.

The vast, vast majority of residences simply do not need symmetric 25 Gbps bandwidth, and it would be a massive waste of resources to try to build out a residential network providing that level of bandwidth, rather than prioritizing universal accessibility of 50 or 100 Mbps.

I'd liken it to the overprovisioning of EV batteries, particularly in North America. Many, many car owners would be perfectly satisfied with a car with a range of only (say) 60 miles or 100 km, and overall EV cost and adoption rate are hurt by the fact that leading-edge manufacturers, especially Tesla, were only building EVs with 5x that range.


Even in wireless / mobile, carriers have companies like American Tower / China Tower that share infrastructure cost. So none of this is new; I always thought the reason it is not done is company interest and politics. Internet should be treated just like electricity and water.

There are other things we could do without completely changing the dynamics or policy. We could mandate that all home leasing and sales have the internet speed labelled, giving consumers knowledge and choice. And all new homes should come with at least 1Gbps internet, upgradable to 10Gbps or higher, as standard. The market will sort itself out, and it gives the government some room to further negotiate terms with companies.

Now the technical questions: why no sharing? Why point to point? Why 4 fibres and not 2 or 8? The "no sharing" is a bit of a gimmick, because at the end of the day everything is shared. The backbone has 100Gbps, and you can't have 10 neighbours all using 25Gbps. I also don't think P2P makes sense in a metropolitan city like Tokyo, New York or Hong Kong, especially in high-rise, ultra-high-density buildings with limited space. When 50G-PON barely meets demand we are looking at 100G or 200G-PON. Individual fibre is simply not feasible in those settings.


>Now that the AV2 specification is publicly available

A draft of the AV2 spec, not final. I think they just tagged the AVM 14 release from their research branch. But personally it feels nowhere near final / finished status.


On the surface this looks great. Seems to hit the sweet spot in a lot of areas.

I know it is Rust inspired, but why write it in Rust and not Go?


Because it offers things that Go today doesn't, and never will?

This comment should be pinned at the top.

>Trying to milk the last drop before the patents expire?

Old licenses are grandfathered into previous pricing. So this isn't about milking; it is more likely a tactic aimed specifically at certain companies. But I am wondering why they bother to do this at this stage of the game.

I am hoping we could further innovate on top of H.264 to have a better patent free video codec.


>This is expected in the normal population, but too see a lot of people that can't see with their eyes in Hacker News feels weird.

You are replying to an account created in less than 60 days.


This is a bit unfair. Hackers are born every day.

In relation to the quality of its comment, I thought it was fair. He just completely made things up about false positives.

And in case people don't know, antirez has been complaining about the quality of HN comments for at least a year, especially after AI topics took over HN.

It is still better than Lobsters or other places, though.


Bots too, vanderBOT!

I used to work in robotics, and can't remember the password for my usual username so I pulled this one out of thin air years ago
