The inference is done on the encrypted query, which the model owner knows nothing about. Then they give me back a response that I can homomorphically decrypt. Or split learning, or many of those newer approaches. Because if we haven’t tested these things, we cannot in the future make demands of you, right, of other AI labs, knowing that this kind of privacy-preserving, or at least power-symmetry-preserving, way of using generative AI models actually makes sense.
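To make the encrypted-inference flow concrete, here is a minimal sketch using a textbook Paillier construction, which is additively homomorphic. The primes are deliberately tiny and insecure, and the "model" (a linear scoring function with made-up weights) is purely hypothetical; real encrypted inference on neural networks needs dedicated FHE tooling, but the client/owner asymmetry is the same: the owner computes on ciphertexts only, and only the client can decrypt.

```python
import math
import random

# Toy Paillier cryptosystem -- illustration only, NOT secure (tiny primes).
def keygen(p=293, q=433):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)            # valid because we use g = n + 1
    return (n, n * n), (lam, mu)    # public key, private key

def encrypt(pub, m):
    n, n2 = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # c = (n+1)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, n2 = pub
    lam, mu = priv
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

# 1. Client encrypts its query features; the model owner never sees them.
pub, priv = keygen()
query = [3, 1, 4]
enc_query = [encrypt(pub, x) for x in query]

# 2. Model owner evaluates a linear model on ciphertexts only, using
#    Enc(m)^w = Enc(w*m) and Enc(a)*Enc(b) = Enc(a+b).
weights = [2, 5, 7]
n, n2 = pub
enc_result = 1
for c, w in zip(enc_query, weights):
    enc_result = (enc_result * pow(c, w, n2)) % n2

# 3. Client decrypts the response locally.
result = decrypt(pub, priv, enc_result)
print(result)  # equals 2*3 + 5*1 + 7*4 = 39
```

The point of the sketch is the power symmetry: at no step does the model owner hold a key that reveals the query or the result.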