At Google, we believe there should be a risk-based approach that builds upon sector-specific guidance. Let me talk about some of our recent efforts to help users navigate content that is AI-generated.

First, about content labels on YouTube: we have been a strong supporter of disclosure in appropriate contexts. As we announced in December of last year, we believe it is important for a healthy ecosystem on YouTube that we help inform viewers when they are engaging with content made with generative AI. The first step of this effort came in March 2024, when we launched a new tool in YouTube Creator Studio that enables creators to mark content made with altered or synthetic media, including generative AI. So what does this mean? It means creators will be required to disclose content when it is so realistic that viewers could easily mistake what is being shown for a real person, real place, or real event, and the label will appear within the video description.