

Sounds awesome, but too bad it's impossible to figure out how to actually use these models and what I have to pay for/where.

Something great about this release as well is the release of "file-based apps"

https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...

Basically, you can now write scripts in C# without the ceremony of a solution or project file — write some code in a .cs file and `dotnet run myFile.cs` will execute the file directly.

You can also shebang to make it directly executable!
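A minimal sketch of what that looks like (assumes the .NET 10 SDK is installed; the filename is just an example):

```csharp
#!/usr/bin/env dotnet
// hello.cs — a file-based app: no solution, no .csproj.
// The shebang line lets you `chmod +x hello.cs && ./hello.cs` on Unix;
// otherwise `dotnet run hello.cs` works the same way.
Console.WriteLine("Hello from a single .cs file!");
```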

Hoping this inspires more people to give C# a go — it's incredible these days. Come in, the water is fine.


That's how I learned C in the 80s. Just compile the C file into an EXE. It's a good way to get started.

That said, I'm certain you've always been able to simply compile a .cs to an .exe? When I ran guerilla C# programming classes in jail, I couldn't get anything from the outside, so I was stuck with the .Net v2 csc.exe which is squirreled away in a subfolder of Windows on a default install of Vista.

What .Net 10 adds though is the ability to even scrap main() and just write code like it was Basic.


You've needed to have a project file in the past to compile .cs files, and this gets rid of that need. There are things that are part of more esoteric corners of Roslyn like .csx files that have allowed similar behavior in the past, but this fronts .cs directly as a scripting solution.

Scrapping main() has been a thing for a while in dotnet — so-called "top-level programs" have been in since C# 9/.NET 5, aka about 5 years ago.

https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...
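For anyone who hasn't seen it, a top-level program is literally just statements at the top of the file:

```csharp
// Top-level statements (C# 9+): no class, no Main().
// The compiler synthesizes the entry point around these statements;
// only one file in a project may contain them.
using System;

Console.WriteLine("No Main() required.");
```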


Right, I think .Net 6 was when the default Program.cs template switched to top-level statements.

The oldest version of .Net I could find on Windows 11 was .Net 4, but it still compiles a bare .cs just fine without a project file — csc has been able to do that since v1:

https://www.youtube.com/watch?v=KmIwGxcMOLg


I don't think C# really has bloat — there is generally very little overlap between the things they add, and they don't add much each release. This release's big things were better extension member syntax and the ability to use "field" in properties. Each release is about that big, and I feel like the language is largely very easy to internalize and work in.

New features are often more likely to be syntactic sugar than some big new thing.
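A rough sketch of the two C# 14 features mentioned above (illustrative names, not from any real codebase):

```csharp
// The `field` contextual keyword lets a property body reference its
// compiler-generated backing field, so you no longer hand-write one
// just to add a little validation.
public class Sensor
{
    public double Reading
    {
        get => field;
        set => field = double.IsNaN(value) ? 0.0 : value;
    }
}

// Extension members: an `extension` block can declare extension
// properties, not just extension methods.
public static class StringExtensions
{
    extension(string s)
    {
        public bool IsBlank => string.IsNullOrWhiteSpace(s);
    }
}
```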


This was my take as well — this is just pose estimation from generated stereo panoramic images.


Incredibly disappointing release, especially for a company with so much talent and capital.

Looking at the worlds generated here https://marble.worldlabs.ai/ it looks a lot more like they are just doing image generation for multiview stereo 360° panoramas and then reprojecting that into space. The generations exhibit all the same image artifacts that come from this type of scanning/reconstruction work, all the same data-shadow artifacts, etc.

This is more of a glorified image generator, a far cry from a "world model".


To be fair, multiview-consistent diffusion is extremely hard — getting it right is an accomplishment in its own right, and still very useful. "World model" is probably a misnomer though (what even is a world model?). Their recent work on frame-gen models is probably a bit closer to an actual world model in the traditional sense (https://www.worldlabs.ai/blog/rtfm).


They have $230m in funding and some of the best CS/AI researchers in the world. People like Skybox labs have already released stuff that is effectively the same as this with far less capital and resources. This is THE premier world-model company, and the fact their first release is a far cry from the promise here feels like a bit of a bellwether.

I agree RTFM is more in the "right" direction here, and what is presented here is a bit of a derivative of that. Which makes this release so much more crass, as it seems like a ploy to get platform buy-in from users more so than a release of a "world model".

https://www.skyboxai.net/ https://worldgen.github.io/


Yeah, I'm likewise a bit underwhelmed by the results.

If you go in with the expectation that you give it a single image and it's doing gaussian splatting from a single image and a prompt, it's phenomenal. If you deviate too far from the image viewpoint it breaks down, but it looks decent long enough to be very usable. But if you go in with the expectation that it's generating "worlds", it's not very good. This only passes as a world in a 20-second tech demo where the user isn't given camera controls.

My best guess is that they were forced (by investors, lack of investors, fear of the AI bubble, or whatever) to release something, and this was something they could polish up to production quality and host with reasonable GPU resources.


I assume this is the case, with a drive to create platform economics on their sharing platform so that there is lock-in when anything better releases. This is more of a platform launch than a notable model launch, imo.


I've thought about this a lot, and it comes down ultimately to context size. Programming languages themselves are sort of a "compression technique" for assembly code. Current models, even at the high end (1M-token context windows), have nowhere near enough workable context to be effective at writing even trivial programs in binary or assembly. For simple instructions, sure, but for now the compression of languages (or DSLs) is a context efficiency.


Wouldn't all binaries be in the training data, rather than the context? And output context could be in pieces, with something concatenating the pieces into a working binary?

ChatGPT claims it's possible, but not allowed due to OpenAI safety rules: https://chatgpt.com/share/68fb0a76-6bf8-800c-82f7-605ff9ca22...


The space is being actively developed but it also requires you know where to look.

This is a journal that tracks a lot of new abstract stuff: https://www.abstractgames.org/

Notable recent releases:

https://boardgamegeek.com/boardgame/352238/turncoats

https://boardgamegeek.com/boardgame/2655/hive

https://boardgamegeek.com/boardgame/272380/shobu

https://boardgamegeek.com/boardgame/430875/high-tide


This is incredible, bravo.


From one hand-pixeled isometric dev (https://store.steampowered.com/app/690370/Cantata/) to another, congrats!

People really discount the complexity of doing isometric — it's a surprisingly nuanced thing to implement, especially if you want inner-tile sorting/depth, tiles that occlude others, etc.

For sorting, I ended up using geometry shaders with fixed layers to basically emit objects on top of each other and render everything in one pass. It makes things like the editor and runtime incredibly fast, which it looks like you did as well! Happy to see more games with this style — I think the look is unbeatable.
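For context, the classic CPU-side alternative to a shader-based approach is painter's-algorithm sorting on a grid-depth key. A minimal sketch (illustrative names, not from either game):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Painter's algorithm for isometric tiles: depth grows with grid x + y,
// and a per-tile Layer breaks ties for objects stacked on the same tile.
var tiles = new List<Tile> { new(2, 1, 0), new(0, 0, 0), new(1, 1, 1) };

// Smaller key = farther back = drawn first.
static int SortKey(Tile t) => (t.X + t.Y) * 16 + t.Layer;

foreach (var t in tiles.OrderBy(SortKey))
    Console.WriteLine(t); // draw back-to-front here

record Tile(int X, int Y, int Layer);
```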


Thanks! Sorting whole sprites is definitely a complicated problem and it took some time to arrive at the solution I use today.

