And so the idea is to iteratively collect that feedback and then, based on it, fine-tune the model to be more attuned to that population. Ideally you could do this every day, perhaps in a ChatGPT-like interface, whenever somebody feels that ChatGPT has wronged them through bias or other issues. Currently, if you tell ChatGPT this, it will apologize profusely and then make the same mistake again immediately in the next session.
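The loop being described — gather user reports of problematic outputs, then periodically fine-tune on the corrections — can be sketched roughly as below. Everything here (the `FeedbackItem` fields, the `nightly_finetune` name, the example feedback) is a hypothetical illustration of the workflow, not any real ChatGPT or fine-tuning API; in practice the last step would hand the pairs to an actual training job.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackItem:
    """One user report: the prompt, the model's answer, and the user's correction."""
    prompt: str
    model_response: str
    user_correction: str

@dataclass
class FeedbackStore:
    items: list = field(default_factory=list)

    def add(self, item: FeedbackItem) -> None:
        self.items.append(item)

    def to_training_pairs(self):
        # Each correction becomes a (prompt, target) pair for the next fine-tune.
        return [(i.prompt, i.user_correction) for i in self.items]

def nightly_finetune(store: FeedbackStore) -> dict:
    """Placeholder for the daily fine-tuning job: in a real system this would
    submit the pairs to a training API; here it only reports what would be
    trained on and clears the queue so the next day starts fresh."""
    pairs = store.to_training_pairs()
    summary = {"num_examples": len(pairs)}
    store.items.clear()
    return summary

if __name__ == "__main__":
    store = FeedbackStore()
    store.add(FeedbackItem(
        prompt="Who can be a nurse?",
        model_response="Only women.",
        user_correction="Anyone, regardless of gender.",
    ))
    print(nightly_finetune(store))  # {'num_examples': 1}
```

The point of persisting feedback into training pairs, rather than only apologizing in-session, is exactly the failure mode mentioned above: without a fine-tuning step, nothing carries over to the next session.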