The drop is major in the context of Japanese, which is rough. I can see a significant drop in quality even when interacting with the same model in, say, Spanish vs. English.
For as rich a culture as the Japanese have, there are only about 1XX million speakers, and the size of the text corpus really matters here. The couple billion English speakers are also highly motivated to choose English over anything else, because the lingua franca has home-field advantage.
To use LLMs effectively you have to work with knowledge of their weaknesses. Math is a good example: you'll get better results from Wolfram Alpha even for the simple things, which is expected.
Broad reasoning and explanations tend to go better than overly specific topics, and the more common a language, the better the response.
If a topic has a billion tutorials online, an LLM has a really high chance of figuring it out first try.
Be smart with the context you provide: the more you actively constrain an LLM, the more likely it is to work with you.
I have friends who just feed it class notes to generate questions and probe it for blind spots until they're satisfied. The improvement in their grades makes it seem like a good approach, but they know that just feeding answers back to the LLM isn't trustworthy, so they do that and then also check by themselves. The extra time is valuable in itself, if only to improve familiarity with the subject.