deskamess's comments | Hacker News

For a long time I thought my RTX 2060 was just not capable, but the other day I did an ffmpeg GPU transcode and was surprised by how well it did. So now I am thinking about running some of Google's new Gemma edge models on it (probably only the smallest will work with my 6 GB VRAM + 2 GB setup). I am not 100% sure what that 2 GB is, but I think it is borrowed from system memory in some manner.

Video encoding uses dedicated silicon (NVENC on NVIDIA cards); it is not using the card's compute units.
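For reference, a hardware-accelerated transcode of this kind might look like the following. This is a sketch, assuming an ffmpeg build with CUDA/NVDEC/NVENC support; the file names are placeholders:

```shell
# Decode on the GPU (NVDEC), scale on the GPU, and encode with NVENC,
# keeping frames in GPU memory end to end.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
       -vf scale_cuda=1280:720 \
       -c:v h264_nvenc -b:v 4M \
       -c:a copy \
       output.mp4
```

Because the dedicated encoder block does the heavy lifting, this barely touches the CUDA cores, which is why even a modest card like a 2060 handles it well.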

I like the Claude desktop interface: the color scheme, presentation, fonts, etc. Is there a CSS file I can find for the desktop version? I assume it's using some kind of web rendering engine, and CSS is part of it.

Is there a link to US/Canada retailers?

Edit: Never mind. I always find it after asking a question.


uv has been very useful, but I am also looking at pixi. Does anyone have experience with it? I hear good things about it.


Can highly recommend pixi. It really is "uv but for Conda", and actually quite a bit more, imo. I don't know how relevant this is for you, but many packages like PyTorch are no longer being built for Intel Macs, and some, like OpenCV, are built such that they require macOS 13+. That's usually not too much of a problem on your most likely pretty modern dev machine, but when shipping software to customers you want to support old machines. In the Conda ecosystem, packages are built for a much wider variety of platforms than the wheels you'd find on pypi.org, so that can be very useful.

So I can really recommend trying pixi. You can even mix dependencies from pypi.org with the ones from Conda, so in most cases you really get the best of both worlds. The maintainers are also really nice and responsive on their Discord server.
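Getting started is quick; a minimal sketch of the workflow described above (the package names are just illustrative examples):

```shell
# Create a project and add dependencies resolved from conda channels
pixi init my-project && cd my-project
pixi add python pytorch

# Mix in a PyPI dependency alongside the conda ones
# ("some-pypi-only-pkg" is a hypothetical placeholder name)
pixi add --pypi some-pypi-only-pkg

# Run inside the project's environment
pixi run python main.py
```

The `--pypi` flag is what enables the mixing of ecosystems mentioned above: conda packages and wheels end up resolved into the same locked environment.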


I had no idea it was AI-assisted (as another comment put it). However, I am fine with this... I would certainly enhance my long-form content the way the author described. The author mentioned the use of a world bible and style guides, and it shows through in the consistency and tightness of the article. And that is key: taking something AI-generated (based on a prompt) and reworking it systematically, iteratively, with a human in the loop. The end result was a great read.


Are there any restrictions on how short the error_slug needs to be? The meat of some of my errors can be pretty long (an ffmpeg error, for example). There are also many phases to a job (call them tasks). Can a canonical log line be a collection of task log lines?


You should avoid dumping the raw error entirely; the idea is that error_slug is a stable grouping key.

The idea is to consolidate everything that can be grouped into one logical unit, so you would emit one long log line at the end, after all tasks are done.
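As a sketch of that pattern (the error_slug field is from the thread; everything else, including the task names, is a hypothetical illustration):

```python
import json
import time

def run_job(tasks):
    """Run each (name, fn) task, collecting per-task results, then
    emit ONE canonical log line for the whole job at the end."""
    start = time.monotonic()
    canonical = {"event": "job_completed", "tasks": []}
    for name, fn in tasks:
        try:
            fn()
            canonical["tasks"].append({"task": name, "status": "ok"})
        except Exception:
            # A short, stable slug for grouping; the raw error text
            # goes to the detailed log, not the canonical line.
            slug = f"{name}_failed"
            canonical["tasks"].append(
                {"task": name, "status": "error", "error_slug": slug}
            )
            canonical["error_slug"] = slug
    canonical["duration_s"] = round(time.monotonic() - start, 3)
    return json.dumps(canonical)

def failing_upload():
    raise RuntimeError("boom")

line = run_job([("transcode", lambda: None), ("upload", failing_upload)])
```

The per-task entries answer the "collection of task log lines" question: the tasks stay sub-records of one summary line, while the top-level error_slug remains short and stable for grouping.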


I see from one of your responses that this is a complement to an existing logging system, a one-line summary. That works for me.


Can you use Handy exclusively via the CLI if you have a file to feed it?


Not sure about that.


Not currently.


Do they have a good multilingual embedding model? Ideally with a decent context size like 16/32K. I think Qwen has one at 32K. Even the Gemma contexts are pretty small (8K).


It is still prescribed for epilepsy. I am actually hoping for some medication stories, if anyone, or someone they know, has both ADHD and epilepsy. It's for a juvenile, but your stories can be for any age. Or pointers to any resources about the combo.


This makes 50 cookies. I think they are too small (a tsp scoop per cookie on the baking sheet); that's the only mod I would make.

