So what we will recommend is a principled approach to AI model disclosures, balancing the needs and privacy of users and recognizing the importance of protecting IP and information that contributes to industry competitiveness. For example, where it is possible and appropriate, a developer of an LLM should provide documentation outlining how the model is intended to be used, whether there are any known inappropriate uses, known risks, and any recommendations for deployers and users to manage risk. Importantly, we also recommend that governments around the world support the development of international technical standards on AI, including common benchmarks and standards for AI safety evaluations. There should also be mechanisms to support interoperability in AI safety testing to avoid unnecessary duplication.