We don’t allow builders to create chatbots that pretend to be real people, such as political candidates or celebrities, or institutions like governments. We don’t allow applications that deter people from participating in democratic processes. And we have teams focused on enforcing these usage policies: they monitor what’s happening, what’s being built, and what people are using these technologies for, so that we can figure out how to stop bad behaviors and how to build technologies and classifiers that detect them more automatically. Also, with our new GPTs, we let the people using them flag and report potential violations to us.