And so, I would say the pressure on leading labs to deliver on democratic governance is very high. It is much higher than for, say, biodefense or nuclear non-proliferation, because not everyone has an opinion on how those things should work, except of course that they should not harm people or explode. But for language models, literally everyone has a different idea. And after a few interactions, their ideas change, right? So unless alignment, the democratic governance, can keep pace with those changing expectations, no board is fit to respond in a here-and-now manner. So, yeah…