Hacker News

I really like these tools. Yesterday I gave it a filename for a video of my infant daughter eating which I took while I had my phone on the charger. The top of the charger slightly obscured the video.

I told it to crop the video to just her, removing the obscured portion, and mentioned that I had ffmpeg and ImageMagick installed. It looked at the video, worked out the crop dimensions, then ran ffmpeg, and I had a cleaned-up video of her. Marvelous experience.
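(For anyone who wants the manual version: the usual ffmpeg recipe is a cropdetect pass to find the active picture area, then a second pass to apply the crop. Filenames and the crop numbers below are placeholders, not from my actual video.)

```shell
# Pass 1: let ffmpeg estimate the crop rectangle (prints crop=W:H:X:Y lines)
ffmpeg -i input.mp4 -vf cropdetect -f null - 2>&1 | grep -o 'crop=[0-9:]*' | tail -1

# Pass 2: apply the detected rectangle, copying audio untouched
ffmpeg -i input.mp4 -vf "crop=1080:1400:0:120" -c:a copy output.mp4
```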

My only complaint is that sometimes I want high speed. Unfortunately Cerebras and Groq don't seem to have APIs that are compatible enough for someone to have put them into Charm Crush or anything. But I can't wait for that.



You could try using a router. I'm currently building this:

https://github.com/grafbase/nexus/

If Groq talks the OpenAI API, you enable the Anthropic protocol and an OpenAI provider with a base URL pointing to Groq. Set ANTHROPIC_BASE_URL to the router's endpoint and start claude.

I haven't tested Groq yet, but this could be an interesting use case...
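Concretely, the wiring would be something like the below; the port is an assumption and I haven't tested any of it:

```shell
# Hypothetical setup: nexus running locally, translating Anthropic-protocol
# calls from Claude Code into OpenAI-protocol calls to a Groq backend.
export ANTHROPIC_BASE_URL="http://localhost:8000"   # nexus endpoint (assumed port)
export ANTHROPIC_AUTH_TOKEN="gsk-your-groq-key"     # placeholder key
# then start the CLI: claude
```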


I assumed that OpenRouter wouldn't deliver the same tokens/second, which seems to have been a complete mistake; I should have tried it to see. I currently use `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` with z.ai and it works well, but CC 2.0 now displays a warning:

      Auth conflict: Both a token (ANTHROPIC_AUTH_TOKEN) and an API key (/login managed key) are set. This may lead to unexpected behavior.
    • Trying to use ANTHROPIC_AUTH_TOKEN? claude /logout
    • Trying to use /login managed key? Unset the ANTHROPIC_AUTH_TOKEN environment variable.
Probably just another flag to find.
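If anyone else hits this, the sequence matching the warning's first suggestion looks roughly like this (the z.ai endpoint and key are placeholders, not verified by me):

```shell
# Clear the /login-managed key first (run once): claude /logout
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"  # placeholder endpoint
export ANTHROPIC_AUTH_TOKEN="your-zai-api-key"              # placeholder key
# then: claude
```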

EDIT: For anyone coming here from elsewhere, Crush from Charm supports Cerebras/Groq natively!


However, after a day of using Crush with Qwen-3-480B-coder I am disappointed and will be canceling my Cerebras subscription. The model + agent pair is substantially worse than Claude Code with Sonnet 4, and I am going to have to return to the latter. Qwen-3 in my workflow requires a lot of handholding and review, and the gains from rapid generation are ruined by the number of errors in the generated code.

Crush is also not a good assistant. It does not integrate scrollback with iTerm2, so I can't look back at what the assistant did. The pane that shows the diff side by side is cool, but in practice I want to go see the diff plus the reasoning afterwards so I can alter sections of it more easily, and I can't do that.


Isn't cropping a video something you can do in the photos app in 2 seconds?


yeah, removing the unwanted item and keeping the video uncropped is surely more desirable, but far beyond the capabilities of "ai"


Maybe I'm misunderstanding, but it seems like you're just talking about AI inpainting. That's like one of the first things people did with image diffusion technology. NVIDIA published a research paper on it back in 2018: https://arxiv.org/abs/1804.07723

Inpainting is harder on videos than on images, but there are plenty of models that can do it. Google's Veo 3 can remove objects from videos: https://deepmind.google/models/veo/


I simply did not know you could do that with videos. TIL!


I got a laugh out of this. Using an LLM to crop a video does feel like dropping a nuke to hammer in a nail.


It isn't even the worst I've done. I've dumped a table into ChatGPT and asked it to CSVize it and do some trivial operations on the table. This is straightforward to do in Google Sheets. It is very much like that: boiling an ocean to get some tea.


Cerebras has OpenAI-compatible "Qwen Code" support at ~4,000 tokens/s. Qwen Code's 480B-parameter model (MoE) is quite good. Not quite Sonnet good, but the speed is amazing.

https://www.cerebras.ai/blog/introducing-cerebras-code


When they announced this I went to try it, and really they only work with Cline (which is what they promote there), but Cline has a VS Code dependency as far as I know and I don't really like that. I have my IDE flow and my CLI flow, and I don't want to mix them.

But you're right, they have an OpenAI compatible API https://inference-docs.cerebras.ai/resources/openai so perhaps I can actually use this in the CLI! Thanks for making me take another look.
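A minimal sketch of hitting that endpoint directly from the CLI, assuming the model id from the Cerebras Code announcement (I haven't verified the exact id):

```shell
# OpenAI-compatible chat completion against Cerebras (model id assumed).
curl https://api.cerebras.ai/v1/chat/completions \
  -H "Authorization: Bearer $CEREBRAS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen-3-coder-480b", "messages": [{"role": "user", "content": "hello"}]}'
```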

EDIT: Woah, Charm supports this natively. This is great. I am going to try this now.


I'm using Cerebras's MCP with Claude Code and it works mostly OK. CC doesn't send updates through the MCP by default (as far as I can tell), so I had to add an instruction to CLAUDE.md telling it to always send code creation and updates through the Cerebras MCP, which works pretty well.
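The nudge I mean is just a line or two in CLAUDE.md; wording along these lines (illustrative, not my exact text):

```
# CLAUDE.md (excerpt)
Always perform code creation and file edits through the Cerebras MCP tools
rather than writing files directly.
```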


This is an interesting idea. Since I have the subscription for the rest of the month, I'll give it a crack. I wasn't impressed by the Qwen-3 model, though.


Cerebras is super cool. I wish OpenAI and Anthropic would host their models there. But I guess supporting yet another platform is hard.


The Cline extension can use Grok; in fact, I think it's free at the moment. I tried Claude Code and Cline for similar tasks and found Claude Code incredibly expensive but not better, so I've been sticking with Cline and switching between APIs depending on which model currently has the best price/performance going on.


Claude Code with the Max plan is significantly cheaper for full-time use.


This isn't my experience at all.


I wish the Cline extension were more performant. It adds 1000+ ms to VS Code's startup time and stutters occasionally. In terms of workflow, though, it's my absolute favorite. I simply don't think the models are there yet for fully agentic coding in any reasonably complex/novel codebase; Cline lets me supervise the LLM step by step.



