Hacker News | brcmthrowaway's comments

Can someone explain what this whole issue with RTOS and latency means in practice?

Why would someone make a context switch HIGH latency? That defeats the purpose.


Not sure what you are referring to, no one said that?

Does there exist a camera that can zoom into a single person from this distance?

Nope, not one that exists today could easily be taken into space. Plus there's the atmosphere interfering.

What clearance level do you have?

There's a relevant XKCD "What If?" [0].

[0] - https://what-if.xkcd.com/32/


How does LMStudio compare to Unsloth Studio?

What an epic takedown.

Microsoft should have promoted this guy instead of laying him off.

Did Microsoft really lose OpenAI as a customer?


The answer to your question is in the public releases. MS went from primary partner (under ROFR) to one of the options. They retain IP rights and API hosting, although in recent weeks we learned that OpenAI was planning a workaround with AWS and Microsoft said they might sue them for that. So the happy marriage is over, it’s more like a custody battle now: https://www.reuters.com/technology/microsoft-weighs-legal-ac...

It was way worse before AI

How does laser communication work with a moving object with 9DoF?!

Apparently with a gimbal and some fast-moving mirrors.

https://www.ll.mit.edu/news/lincoln-laboratory-laser-communi...


It also helps that laser beams diverge. By the time it gets back to Earth, the diameter of the beam from Artemis is probably several hundred meters, if not several kilometres. Their aim still needs to be fairly precise, but they're not trying to hit a lens with a beam that's still the width of a pencil. They really just need to paint the neighbourhood that NASA's sensors are located in.

6 km ([slide show](https://ntrs.nasa.gov/api/citations/20250009875/downloads/Op...) with data points and the worst slides government agencies were able to create)
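The divergence claim above can be sanity-checked with a rough diffraction-limited estimate. This is a sketch, not the actual terminal spec: the ~10 cm transmit telescope and 1550 nm wavelength are assumptions (plausible for an optical comms terminal, not confirmed figures from the slides).

```python
import math

WAVELENGTH = 1550e-9      # m, telecom-band laser (assumption)
APERTURE = 0.10           # m, ~4-inch transmit telescope (assumption)
MOON_DISTANCE = 3.844e8   # m, mean Earth-Moon distance

# Diffraction-limited full-angle divergence (to the Airy first null): 2.44 * lambda / D
divergence = 2.44 * WAVELENGTH / APERTURE      # radians

# Beam spot diameter by the time it reaches Earth
spot_diameter = divergence * MOON_DISTANCE     # metres

# Pointing tolerance (half-angle) to keep the ground station inside the spot
pointing = (spot_diameter / 2) / MOON_DISTANCE # radians

print(f"divergence: {divergence * 1e6:.1f} urad")
print(f"spot diameter at Earth: {spot_diameter / 1000:.1f} km")
print(f"pointing tolerance: {pointing * 1e6:.1f} urad")
```

With these assumed numbers the spot comes out on the order of 14 km wide, the same order of magnitude as the ~6 km figure (a larger aperture or a tighter beam-width definition narrows it), and the required pointing is tens of microradians rather than the nanoradian precision a pencil-thin beam would demand.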

I was wondering about this too! I had no idea how they could aim a laser at a moving spaceship from so far away.

I generated this visual map to help me understand it - https://vectree.io/c/aiming-space-lasers-gimbals-and-beam-di...


Just like this, a Starlink gimbal being tested for future third party laser comms: https://www.youtube.com/watch?v=dpFfC9WY0qs

Doesn't look very precise.

Anyone remember the 90s software Terragen and Vue3d?

I loved playing around with KPT Bryce so much. IDK if it ever was used productively, but it was so ahead of its time.


As a kid who spent many hours on Vista on the Amiga 500, this has blown my mind

Wrong context

Ah, the eternal handwave for anything the AI doesn't do well - it must be user error.

No. Aside from just making an algorithm that didn’t even run, it refused to use an MCP that it had registered in the same context session.

Wow. I doubt Anthropic can raise that. Are they more efficient, can they make do with less?

Given how all of Big Tech (except Google obviously) is going all in on Claude Code, I wouldn't be surprised if Anthropic becomes profitable first.

Anthropic doesn't have anything else other than the Claude models.

But notice that there's not a single mention of DeepSeek, which tells me they are preparing to scare everyone again. Which is why Dario continues to scare-monger about local models.

Sometimes you do not need hundreds of billions of dollars for inference when it can be done locally with efficient software; Google proved that. But where is the money in that? So the flawed belief persists that you must buy GPUs infinitely to scale, which is exactly what Nvidia needs you to do.

Only a matter of time for local models to reach Opus level. We are 1 or at most 2 years behind that and Anthropic knows that.


> Only a matter of time for local models to reach Opus level. We are 1 or at most 2 years behind that and Anthropic knows that.

Can confirm. Kimi K2.5 is pretty intelligent and most of the time there's no difference between Opus and Kimi.


Local models just make no economic sense since the GPU will idle 99% of the time.

You have a GPU already (at least an iGPU and an NPU on most newer platforms) as part of your computer, might as well get some use out of it with local inference. And trying to do inference on a larger model with an undersized GPU will have you idling a lot less than 99% - but that still makes a lot of sense for most casual users who will only rarely need a genuine "Pro" class answer from AI. Doing that locally is way less hassle than paying for a subscription or messing with API spend.

That's false for a distributed team.

How does Dune3D compare to FreeCAD?

>> How does Dune3D compare to FreeCAD?

Dune3D is more like SolveSpace with a few improvements and bug fixes, rather than being anywhere near FreeCAD in terms of capability. Improvements include using STEP files in assemblies and some ability to make fillets or chamfers. The bug fixes come from using OCCT for NURBS surfaces; SolveSpace frequently fails with NURBS boolean operations.

As for overall capability, FreeCAD does everything these others do but also supports lofting and other modeling options, BIM for architecture, I think pre- and post-processing for FEA, and maybe some other "big tool" things.


It takes 10× as long to sketch in it compared to SolveSpace. At least for me.
