Let’s see. Robotics problems were claimed to be tractable through AI. In practice, though, the large majority of robotics solutions today are roughly 90% classical control systems (which follow some degree of causal analysis), with AI layered on top to optimize the last few percent where possible.
Detecting that something is "odd" is hard for an algorithm (especially a deep neural network) because these models fail in so many different ways. For example, LeNet trained on MNIST almost always gives high-confidence predictions for random tensors (torch.randn). Most ImageNet models fail in the presence of just 20-30% salt-and-pepper noise. (Both of these are problems solvable through simple preprocessing techniques.)
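To make the salt-and-pepper point concrete, here is a minimal sketch (assuming a grayscale image as a NumPy array; the image values and noise fraction are made up for illustration). It corrupts 25% of the pixels and then removes most of the damage with a plain 3x3 median filter, the kind of simple preprocessing mentioned above:

```python
import numpy as np

def add_salt_pepper(img, frac, rng):
    """Corrupt a fraction `frac` of pixels: half set to 0 (pepper), half to 1 (salt)."""
    noisy = img.copy()
    idx = rng.choice(img.size, size=int(frac * img.size), replace=False)
    flat = noisy.ravel()                 # view into `noisy`, so writes stick
    flat[idx[: len(idx) // 2]] = 0.0     # pepper
    flat[idx[len(idx) // 2:]] = 1.0      # salt
    return noisy

def median_filter3(img):
    """3x3 median filter with edge padding -- suppresses isolated extreme pixels."""
    padded = np.pad(img, 1, mode="edge")
    windows = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

rng = np.random.default_rng(0)
clean = rng.uniform(0.3, 0.7, size=(32, 32))   # stand-in mid-gray "image"
noisy = add_salt_pepper(clean, 0.25, rng)
restored = median_filter3(noisy)

err_noisy = np.abs(noisy - clean).mean()
err_restored = np.abs(restored - clean).mean()
print(err_noisy, err_restored)
```

The restored error comes out far below the noisy error: the median is robust to a minority of extreme values in each window, which is exactly why this cheap filter neutralizes noise that otherwise breaks ImageNet-scale classifiers.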
Not to mention that most models are trained without a background ("none of the above") class, so they tend to give overconfident predictions on out-of-distribution samples.