MLCommons and AI Verify to collaborate on AI Safety Initiative
Today in Singapore, MLCommons® and AI Verify signed a memorandum of intent to collaborate on developing a set of common safety testing benchmarks for generative AI models for the betterment of AI safety globally.
- “There is significant interest in the generative AI community globally to develop a common approach towards generative AI safety evaluations,” said Peter Mattson, MLCommons President and AI Safety working group co-chair.
- “The MLCommons–AI Verify collaboration is a step forward towards creating a global and inclusive standard for AI safety testing, with benchmarks designed to address safety risks across diverse contexts, languages, cultures, and value systems.”
The MLCommons AI Safety working group — a global group of academic researchers, industry technical experts, policy and standards representatives, and civil society advocates — recently announced a v0.5 AI Safety benchmark proof of concept (POC). The working group encourages global participation to help shape the v1.0 AI Safety benchmark suite and beyond.