Given these safety concerns, we’re establishing an AI evaluation center. The center will focus on examining how AI models confabulate or perpetuate epistemic injustices. By identifying and mitigating these patterns, we aim to foster more honest AI interactions.