Then none of the machine learning effort put into, say, English or Mandarin is actually a hindrance. Because with a sufficient number of bridges, I believe it's now off-the-shelf technology to apply your vocal acoustic model to very different languages, so that it sounds like you're speaking them. And you can also build parallel corpora across different low-resource languages quite efficiently through crowdsourced lexicography. That was my work at Oxford University Dictionary: we helped build such crowdsourcing resources, frameworks, and so on.