So, this solves a problem: our adversaries are paid professionals working nine to six on information manipulation, but the community fact-checkers are not, so they could only add clarifications when they had time. Now, with language models automatically providing clarifying context, the community can focus on training that model instead of manually replying to every piece of viral disinformation. So, I think language models can also be used for defense, for the blue team, but the prerequisite is that we share the information and threat indicators with everyone around the world.
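To make the idea concrete, here is a minimal sketch of what that blue-team loop could look like: a flagged viral post goes to a language model that drafts a clarifying note, and community fact-checkers review the draft and feed their corrections back as training data instead of writing every reply by hand. The `call_llm` stub, the data fields, and the review step are illustrative assumptions, not a description of any particular platform.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FlaggedPost:
    post_id: str
    text: str

@dataclass
class ReviewedNote:
    post_id: str
    draft: str
    approved_text: str  # the fact-checker's corrected version

def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model call (any chat/completions API would do here)."""
    return "Draft clarifying note for: " + prompt[:80]

def draft_clarification(post: FlaggedPost) -> str:
    # The model, not a human, writes the first draft of the clarifying context.
    prompt = "Write a short, sourced clarification for this viral claim:\n" + post.text
    return call_llm(prompt)

def collect_training_data(reviews: List[ReviewedNote]) -> List[dict]:
    # Fact-checkers' corrections become training examples, so their effort
    # improves the model rather than answering one post at a time.
    return [
        {"input": r.draft, "target": r.approved_text}
        for r in reviews
        if r.approved_text
    ]

if __name__ == "__main__":
    post = FlaggedPost(post_id="p1", text="Claim X is spreading rapidly.")
    draft = draft_clarification(post)
    review = ReviewedNote(post_id=post.post_id, draft=draft,
                          approved_text="Claim X is false; see source Y.")
    print(collect_training_data([review]))
```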