Also, from a general AI safety perspective it seems prudent to retain that sort of control there. Maybe if we're really confident that a model is not going to be dangerous or destabilizing, then it's fine at some point to release it in a totally open way. But we want to be cautious and thoughtful about staging that and about how we release it.
