CodeGemma has fewer parameters than Llama3, so it absolutely should not be slower. That sounds like a configuration issue.

Meta originally released Llama2 and CodeLlama, and CodeLlama vastly improved on Llama2 for coding tasks. Llama3-8B is okay at coding, but I think CodeGemma-1.1-7b-it is significantly better than Llama3-8B-Instruct, and possibly a little better than Llama3-70B-Instruct, so there is plenty of room for Meta to improve Llama3 in that regard.

> Was there anything official from Meta?

https://ai.meta.com/blog/meta-llama-3/

"The text-based models we are releasing today are the first in the Llama 3 collection of models."

Just a hint that they will be releasing more models in the same family, and CodeLlama3 seems like a given to me.



I suppose it could be a quantization issue, but both quants were done by lmstudio-community. Llama3 does have a different architecture and a much bigger tokenizer vocabulary, which might explain it.


You should try ollama and see what happens. On the same hardware, with the same q8_0 quantization on both models, I'm seeing 77 tokens/s with Llama3-8B and 72 tokens/s with CodeGemma-7B, which is a very surprising result to me, but the two are still very close in performance.
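If you want to compare the numbers yourself, ollama's /api/generate response reports generation stats, including eval_count (tokens generated) and eval_duration (generation time in nanoseconds). A minimal sketch of turning those fields into a tokens/s figure, with hypothetical numbers roughly matching the ones above:

```python
def tokens_per_second(response: dict) -> float:
    """Compute generation speed from ollama's /api/generate stats.

    eval_count is the number of tokens generated; eval_duration is
    the generation time in nanoseconds, so scale by 1e9 to get tok/s.
    """
    return response["eval_count"] / response["eval_duration"] * 1e9

# Hypothetical stats: 462 tokens generated in 6 seconds
stats = {"eval_count": 462, "eval_duration": 6_000_000_000}
print(round(tokens_per_second(stats), 1))  # 77.0
```

The same two fields appear in the trailing JSON object of a streaming response, so the calculation works either way.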


You're right, ollama does perform the same on both models. Thanks.



