That’s how it’s supposed to be. It doesn’t often work that way, but Reset is trying to come up with the policy component for both auditing and other, more responsible algorithmic machine learning models: how do you make sure that you can stand behind both of them being built on real, evidence-based primitives?