For us, a lot of the bad behavior happens when people try to use the models to create GPTs that do the things I just outlined, which we don’t allow. But of course there are also people using our API, or just ChatGPT, to generate pieces of content. So we have various teams enforcing against those. One important thing we’re doing: there are dialogues happening between OpenAI and platform companies like Meta and Google to make sure we do our part to minimize the creation of bad or deceptive content. We also need to partner with the platform companies so that we communicate when we find that kind of behavior, and so that when an undesirable piece of content does slip through, we have a way to monitor whether it spreads and reaches really high prevalence, even if the actual rate of creating violating content is very low.