So, I’m quite positive that while, of course, the collaborative training side — federated learning, fine-tuning, and so on — still takes some work, inference itself is actually no longer a difficult problem. My MacBook quite easily runs quantized versions of Vicuna, the language model, and many people can now run what’s called WebLLM on their phones, which is a GPT-2-ish assistant running in the browser. So, if you’re not aiming for GPT-4-level intelligence but just a general-purpose assistant that helps you plan out social networking, then I think today’s technology is already quite sufficient, and with hardware acceleration it will be even better in the future.
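The quantization mentioned above is what makes laptop-scale inference feasible: storing each weight in 8 (or even 4) bits instead of 32 cuts the model’s memory footprint several-fold at a small cost in precision. Here is a minimal, illustrative sketch of symmetric int8 quantization (the helper names are hypothetical; real runtimes such as llama.cpp use more elaborate block-wise 4-bit schemes):

```python
# Sketch of symmetric per-tensor int8 quantization: each float32 weight
# (4 bytes) is mapped to an int8 value (1 byte), roughly a 4x memory
# saving, with w ≈ scale * q on dequantization.

def quantize_int8(weights):
    """Map floats to int8 codes in [-127, 127] with a shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [scale * v for v in q]

weights = [0.12, -0.54, 1.27, -1.27, 0.003]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# Worst-case rounding error is bounded by half the scale step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The same idea, applied block-by-block with 4-bit codes, is what lets multi-billion-parameter models like Vicuna fit into the RAM of an ordinary laptop or phone.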