I think you mentioned compatibility with Llama models, but I thought ODI already had an OpenAI shim running, the API server, so that's not really a priority. The priority is mostly just a reproducible v1 deployment shared with ODI, so that we can start porting the good old parts to light. That was just a clarification. As for the talk-to-clusters stuff, I think as long as there's a good OpenAPI, people can just rediscover it using SillyTavern or whatever.
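The point about clients like SillyTavern hinges on the shim exposing an OpenAI-compatible endpoint. As a minimal sketch of what that implies, here is the request shape such a client would send; the base URL, port, and model name are assumptions for illustration, not details from this discussion.

```python
import json

# Hypothetical OpenAI-compatible endpoint exposed by the shim;
# host, port, and model name are assumptions for illustration.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(user_message: str, model: str = "llama-3-8b-instruct") -> dict:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }

# Any OpenAI-compatible client (SillyTavern included) sends this shape.
payload = build_chat_request("Hello from the cluster")
print(json.dumps(payload, indent=2))
```

If the shim accepts this schema at `/v1/chat/completions`, off-the-shelf frontends can be pointed at it with just a base-URL change.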