This is not to say that AI will never develop this capability. It is merely an observation that most of the machine learning algorithms in use today were not designed with explainability as a primary criterion. People essentially have to perform forensics after the fact to reconstruct how a model did, or did not, reason.
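To make the "forensics" idea concrete, here is a minimal sketch of one common post-hoc approach: treating the model as a black box and perturbing one input at a time to see how much the output moves. The `black_box_model`, its weights, and the feature names are all hypothetical stand-ins, not any particular system; real post-hoc tools (feature-importance methods, saliency maps, and the like) are far more sophisticated, but they share this after-the-fact probing character.

```python
# Hypothetical sketch: probing an opaque model after the fact by
# perturbing one input at a time and watching the output shift.
# The "model" below is a stand-in black box, not a real learned system.

def black_box_model(features):
    # Opaque scoring function; imagine these weights were learned
    # elsewhere and are invisible to the person doing the forensics.
    w = {"income": 0.7, "age": 0.1, "zip_digit": 0.0}
    return sum(w[name] * value for name, value in features.items())

def sensitivity(model, features, delta=1.0):
    """Estimate each feature's influence by nudging it and
    measuring how far the model's output moves."""
    base = model(features)
    influence = {}
    for name in features:
        bumped = dict(features)
        bumped[name] += delta
        influence[name] = model(bumped) - base
    return influence

applicant = {"income": 50.0, "age": 30.0, "zip_digit": 4.0}
print(sensitivity(black_box_model, applicant))
```

Nothing here explains *why* the model has the behavior it does; the probe only reconstructs, from the outside, which inputs the decision was sensitive to. That is exactly the after-the-fact position the paragraph describes.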