The concerns surrounding the safety and security of generative AI are particularly pressing. In this context, I support concepts like Anthropic’s idea of ‘buffering’. This involves evaluating societal vulnerabilities, whether biohazards, cyber interference, or other risks, that emerging frontier models might exploit, and triggering safeguards well before a truly dangerous capability threshold is reached. If such a model exhibits even a fraction of a dangerous capability, say one-sixth, we must consider the possibility that malevolent actors could elicit the remaining capability, of which we might be unaware.
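To make the buffer idea concrete, here is a minimal sketch. The names, scores, and the one-sixth trigger fraction are assumptions for illustration, not Anthropic’s actual evaluation procedure: the point is only that the check fires at a small fraction of the hazardous level, leaving a margin for capability we have not yet observed.

```python
# Hypothetical illustration of a capability evaluation with a safety
# buffer. All names and thresholds are assumptions for this sketch,
# not Anthropic's actual protocol.

DANGER_THRESHOLD = 0.90   # eval score at which a capability is clearly hazardous
BUFFER_FRACTION = 1 / 6   # trigger at one-sixth of the hazardous level

def requires_pause(eval_score: float) -> bool:
    """Return True when the measured score crosses the buffered trigger,
    i.e. the model shows enough of a dangerous capability that the full
    capability must be assumed reachable (for instance, by a determined
    adversary via further fine-tuning or elicitation)."""
    trigger_point = DANGER_THRESHOLD * BUFFER_FRACTION
    return eval_score >= trigger_point

# Example: a score of 0.20 on a hypothetical biohazard eval trips the
# buffered trigger (0.15) long before the hazardous level (0.90) itself.
print(requires_pause(0.20))  # True
```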