In a strict mathematical reading, maybe - depends on how you define "rules", "defined" and "solely" :P. Fortunately, legal language is more straightforward than that.
The obvious straightforward read is along the lines of: imagine you make some software, which then does something bad, and you end up in court defending yourself with an argument like, "I didn't explicitly make it do that; this behavior was a possible outcome (i.e. not a bug), but it wasn't something we intended or could've reasonably predicted." If that argument has a chance of holding water, then the system in question does not fall under the exception you quoted.
The overall point seems to be to make sure systems that can cause harm always have humans that can be held accountable. Software where it's possible to trace the bad outcome back to specific decisions made by specific people who should've known better is OK. Software that's adaptive to the point it can do harm "on its own" and leaves no one but "the system" to blame is not allowed in those applications.