Is there any reason to explicitly train for role reversal? Can't you instead just swap the role labels on any instruct-tuned LLM? The model is trained on both sides of the chat log either way, right?


No. Most of the time, loss is only calculated on the model's response tokens, not on the user's input tokens, so the model is never actually trained to produce the user side of the conversation.
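For concreteness, here is a minimal sketch of how that masking typically looks, assuming PyTorch and the common Hugging Face convention of marking ignored positions with -100: every label position covering the user turn is set to the ignore index, so cross-entropy skips those positions and no gradient flows through them.

    import torch
    import torch.nn.functional as F

    IGNORE_INDEX = -100  # the default ignore_index for F.cross_entropy

    def build_labels(input_ids, response_start):
        # Copy the token ids, then mask everything before the
        # assistant's response so it contributes nothing to the loss.
        labels = input_ids.clone()
        labels[:response_start] = IGNORE_INDEX
        return labels

    # Toy example: a 10-token sequence where the assistant reply
    # starts at index 6 (the usual one-position shift for next-token
    # prediction is omitted for brevity).
    input_ids = torch.arange(10)
    labels = build_labels(input_ids, response_start=6)

    logits = torch.randn(10, 32000)  # (seq_len, vocab_size) from the model
    loss = F.cross_entropy(logits, labels, ignore_index=IGNORE_INDEX)
    # Gradients flow only through positions 6..9; the user-side tokens
    # are never optimized, so the model never learns to generate them.

Because of that masking, simply swapping the role labels at inference time asks the model for a distribution it was never optimized to produce.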



