The 33 vs. 66 refers to the percentage chance that a prediction is correct. But if you have no way to tell whether a given prediction is correct without doing the tests you were trying to avoid in the first place, then it's not really worthwhile except perhaps for some exploratory research.
> But if you have no way to tell whether a given prediction is correct without doing the tests you were trying to avoid in the first place, then it's not really worthwhile except perhaps for some exploratory research.
This isn't really how it works.
To quote from a report on the CASP competition organisers:
> The organizers even worried DeepMind may have been cheating somehow. So Lupas set a special challenge: a membrane protein from a species of archaea, an ancient group of microbes. For 10 years, his research team tried every trick in the book to get an x-ray crystal structure of the protein. “We couldn’t solve it.”
> But AlphaFold had no trouble. It returned a detailed image of a three-part protein with two long helical arms in the middle. The model enabled Lupas and his colleagues to make sense of their x-ray data; within half an hour, they had fit their experimental results to AlphaFold’s predicted structure. “It’s almost perfect,” Lupas says. “They could not possibly have cheated on this. I don’t know how they do it.”[1]
So you have experimental results, but still don't know how it folds. You aren't trying to avoid all the experiments, just to understand them.
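As a rough illustration of that last point, here's a minimal sketch of what comparing an experimental model against a predicted one can look like. The file names and chain ID are hypothetical placeholders, and real crystallographic fitting is done by molecular replacement against the diffraction data rather than a simple coordinate superposition; this just shows the idea of using the prediction to help interpret, not replace, the experiment:

```python
# Sketch: superimpose a predicted model onto a partially built experimental
# model and report the RMSD, as a crude measure of how well they agree.
# Assumes Biopython; "alphafold_model.pdb", "experimental_model.pdb" and
# chain "A" are hypothetical placeholders.
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
predicted = parser.get_structure("predicted", "alphafold_model.pdb")
experimental = parser.get_structure("experimental", "experimental_model.pdb")

# Pull C-alpha atoms from chain A of each model. A real workflow would
# match residues by sequence alignment; here we just pair them in order.
pred_ca = [res["CA"] for res in predicted[0]["A"] if "CA" in res]
expt_ca = [res["CA"] for res in experimental[0]["A"] if "CA" in res]
n = min(len(pred_ca), len(expt_ca))

sup = Superimposer()
sup.set_atoms(expt_ca[:n], pred_ca[:n])   # fixed = experiment, moving = prediction
sup.apply(predicted[0].get_atoms())       # rotate/translate the prediction onto it

print(f"C-alpha RMSD over {n} residues: {sup.rms:.2f} A")
```

In Lupas's case the flow went the other way: the prediction gave them a starting model good enough to finally make sense of a decade's worth of x-ray data.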
I'm not really sure what your point is.