In fact, that was the claim of the Microsoft research paper, Sparks of AGI. They conclude that if you don’t align GPT-4 too much, it’s already an AGI. It behaves as a non-AGI only because it’s forced to wear a smiling mask all the time. So you see well-disciplined behavior most of the time, unless you give it an adversarial prompt, but the original, unaligned GPT-4 is already the beginning of AGI. So, if the threshold has already been crossed, using that threshold as a term no longer makes sense, if you see what I mean.