This is mentioned on their site. It seems they've identified it.
>Currently, we have confirmed that the output of the AI artist is biased. We hope to use a wider variety of training data and increase the diversity of the output in the future.
It seems like the authors are genuinely limited by the data here. I mean, if someone were to do a similar project with Chinese Qing dynasty portraits, you would expect a bias toward Asian faces.
We have to be so conscious of bias in AI, yet in this case I wonder what the solution (if there is one) would be, given you genuinely have a biased data set to begin with.
That's almost for the best. The representation of non-European ethnic facial features / skin tones in Renaissance art appeared to fall mostly into one of two groupings:
1) Accurate, naturalistic portrayals that almost certainly involved an actual human sitter; and