
Can you elaborate? What is it about Haskell that makes it better?

A very advanced type system, which lets you move a lot of program correctness checking into the type system. So basically, if your program compiles, it probably works.

It also has GC, which makes it better suited for most programs, compared to Rust with its manual memory management.


Rust does not have manual memory management, and its type system also has the property that if your program compiles it probably works, IME.

I hear this about both Haskell and Rust, and yet, when I tried both, in the former I wrote a useless program because I didn't handle state (and yet it passed all tests!) while in the latter I immediately wrote a deadlock.

So...yeah.


How did your tests pass if you didn't handle state?

Because it is also possible to write tests that don't adequately capture real-life requirements.

It was an MQTT server, and the tests basically went "if we have these subscriptions, then...", but no subscriptions ever actually got stored by the server.


It is still possible to write bugs in both Haskell and Rust.

Yes, that's my point. I'm replying to claims that "if it compiles it probably works". My limited experience with both is "nah".

I prefer the slogan without "probably", "If it compiles it works", because then at least it's clear it's a slogan and not a formal claim. Everyone knows that if you write

    multiply x y = x + y
then it will compile but not work, so they don't take it literally. But it is a pithy statement of the lived experience of many users of strongly typed programming, which is more accurately described by something like "if it compiles then it will probably do something at least basically sensible and often be pretty close to what you actually wanted".

Purely functional code is easier to test because of its referential transparency and lack of shared state.

Haskell is also nice because of quickcheck.
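A minimal sketch of what that looks like (assuming the QuickCheck package from Hackage): a property over a function that, like the multiply example upthread, compiles fine but is wrong, so QuickCheck falsifies it with a counterexample.

```haskell
-- Sketch, assuming the QuickCheck package is installed.
import Test.QuickCheck

-- Compiles, but is wrong:
badMultiply :: Int -> Int -> Int
badMultiply x y = x + y

-- The property QuickCheck checks against random inputs:
prop_multiply :: Int -> Int -> Bool
prop_multiply x y = badMultiply x y == x * y

main :: IO ()
main = quickCheck prop_multiply  -- reports a falsifying counterexample
</imports>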


TIL Nix flakes work on macos - is this a legit alternative to homebrew?


Yes. It's great. Especially paired with nix-darwin, which lets you declaratively manage all your macOS settings too.


Sort of.

For things that run on Linux and other Unices, yes.

For macOS UI programs, programs that need specific permissions, and commercial programs, stick with Homebrew, though you can still declare what you want from Homebrew in nix.
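A rough sketch of that with nix-darwin's homebrew module (option names are from nix-darwin; the package choices are just examples):

```nix
# Fragment of a darwin configuration; nix-darwin drives `brew` for you.
{
  homebrew = {
    enable = true;
    brews = [ "ffmpeg" ];        # CLI formulae
    casks = [ "firefox" ];       # macOS UI apps
    onActivation.cleanup = "zap"; # remove anything not declared here
  };
}
```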


Using nix-darwin to manage brew declaratively feels like using a jackhammer to nail a picture to the wall, but I can’t live without it anymore.


Interesting trick about the constant value, and thank you for the detailed write up!


Is it possible to use Fil-C as a replacement for valgrind/address sanitizer/leak sanitizer? I.e. say I have a C program that does manual memory management already. Can I then compile it with Fil-C and have it panic/assert on heap use after free, uninitialized memory read (including stack), array out of bounds read, etc?


Does Fil-C catch uninitialized memory reads?


malloc'd memory is zeroed in Fil-C:

> *zgc_alloc*

> Allocate count bytes of zero-initialized memory. May allocate slightly more than count, based on the runtime's minalign (which is currently 16).

> This is a GC allocation, so freeing it is optional. Also, if you free it and then use it, your program is guaranteed to panic.

> libc's malloc just forwards to this. There is no difference between calling malloc and zgc_alloc.

from https://fil-c.org/stdfil


> image-manip squash: This is the key to reclaiming disk space and the core of our strategy to squash the image layers. The tool creates a temporary container, applies all 272 layers in sequence to an empty root filesystem, and then exports the final, merged filesystem as a single new layer. This flattens the image's bloated history into a lean, optimized final state.

Wouldn't a multistage Dockerfile have accomplished the same thing? Something like:

    FROM bigimage
    RUN rm bigfile

    FROM scratch
    COPY --from=0 / /


I think yep, pretty much. Maybe they didn't know this existed?


Given that I was using multi-stage builds in 2020, when I finally got involved in projects with Docker, five years is already plenty of time to learn about this stuff, and I bet it is much older; I haven't bothered to look it up.


What were you using for RAG? Did you build your own or use some off-the-shelf solution (e.g. openwebui)?


I used pgvector, chunking on paragraphs. The answers I saved to a flat text file and then parsed into what I needed.

For parsing and vectorizing of the GCP docs I used a Python script. For reading each quiz question, getting a text embedding and submitting to an LLM, I used Spring AI.

It was all roll your own.

But like I stated in my original post I deleted it without backup or vcs. It was the wrong directory that I deleted. Rookie mistake for which I know better.
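The paragraph-chunking step described above might look roughly like this (an illustrative sketch; the function name and size limit are assumptions, not the deleted code):

```python
def chunk_paragraphs(text: str, max_chars: int = 1000) -> list[str]:
    """Split text on blank lines, merging paragraphs up to max_chars
    so each chunk stays small enough to embed."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be embedded and stored in pgvector alongside its source text.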


You can specify Python version requirements in the comment, as the standard describes.
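That standard is PEP 723's inline script metadata: a TOML block inside comments at the top of the script, which runners like uv and pipx read before executing. For example:

```python
# /// script
# requires-python = ">=3.11"
# dependencies = []
# ///
```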


Looks like the homebrew formula is one behind the latest, 3.0.0 vs 3.0.1 on your site - is the homebrew formula maintained by you or someone else?


That’s not me. I was quite surprised - in a good way - to find it there.


or even use caller to print a full backtrace: https://news.ycombinator.com/item?id=44636927
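A minimal sketch of that idea (not the code from the linked comment): `caller N` prints "line function file" for stack frame N and returns nonzero once the frames run out, so a loop walks the whole stack.

```bash
#!/usr/bin/env bash
# Print a backtrace from wherever backtrace() is called.
backtrace() {
  local frame=0 info
  while info=$(caller "$frame"); do
    echo "  frame ${frame}: ${info}"
    frame=$((frame + 1))
  done
  return 0  # caller's final failure would otherwise leak out as our status
}

inner() { backtrace; }
outer() { inner; }
outer
```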


That's neat, but if your bash script needs a backtrace it should not be a bash script. To each their own though.


My bash scripts don't "need" a backtrace, but it sure is nicer than not having one

