In the social media era, I've often thought that two of the best reforms we could implement to combat its ills are to 1) publish ranking algorithms so we know how big tech companies prioritize the information they deliver to us, introducing a measure of accountability, and 2) open a path toward letting users implement or swap out different algorithms. So Facebook can still be Facebook, but I could say that I want to see more original posts from my friends than rando engagement bait.
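To make the swappable-algorithm idea concrete, here's a rough sketch of what a pluggable ranking interface could look like. Everything here is hypothetical (Post, RankingAlgorithm, friends_first, etc.) and not any platform's actual API; the point is just that a ranker can be a function the user chooses rather than something baked in:

```python
# Hypothetical sketch of a swappable feed-ranking interface.
# None of these names correspond to a real platform's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author_is_friend: bool
    is_original: bool        # original post vs. reshared engagement bait
    engagement_score: float
    timestamp: float

# A ranking algorithm is just a function from a candidate pool to an ordered feed.
RankingAlgorithm = Callable[[list[Post]], list[Post]]

def platform_default(posts: list[Post]) -> list[Post]:
    # What the platform might ship: maximize engagement.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def friends_first(posts: list[Post]) -> list[Post]:
    # What I might choose instead: original posts from friends, newest first.
    return sorted(
        posts,
        key=lambda p: (p.author_is_friend, p.is_original, p.timestamp),
        reverse=True,
    )

def build_feed(posts: list[Post], rank: RankingAlgorithm = platform_default) -> list[Post]:
    # The user picks the ranker; the platform just supplies the candidate pool.
    return rank(posts)
```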
I wonder if something like that could work with regard to how LLMs are trained and released.
People have already noted in the comments that bias is kind of unavoidable and a really hard problem to solve. So wouldn't the solution be 1) more transparency about biases and 2) ways to engage with different models that have different biases?
EDIT: I'll expand on this a bit. The idea of an "unbiased newspaper" has always been largely fiction: bias is a spectrum, and journalistic practices can encourage fairness, but there will always be biases in what gets researched and written about. The solution is knowing that when you open the NYT or the WSJ you're getting different editorial interests, not restricting access to either of them. Make the biases known and do what you can to allow different biases to have a voice.