AI safety

MLCommons and AI Verify to collaborate on AI Safety Initiative

Retrieved on: 
Friday, May 31, 2024

Today in Singapore, MLCommons® and AI Verify signed a memorandum of intent to collaborate on developing a set of common safety testing benchmarks for generative AI models for the betterment of AI safety globally.

Key Points: 
  • “There is significant interest in the generative AI community globally to develop a common approach towards generative AI safety evaluations,” said Peter Mattson, MLCommons President and AI Safety working group co-chair.
  • “The MLCommons AI Verify collaboration is a step forward towards creating a global and inclusive standard for AI safety testing, with benchmarks designed to address safety risks across diverse contexts, languages, cultures, and value systems.”
  • The MLCommons AI Safety working group, a global group of academic researchers, industry technical experts, policy and standards representatives, and civil society advocates, recently announced a v0.5 AI Safety benchmark proof of concept (POC).
  • The AI Safety working group encourages global participation to help shape the v1.0 AI Safety benchmark suite and beyond.

RSA Conference Closes Out 33rd Annual Event by Discovering the Art of What's Possible Together

Retrieved on: 
Friday, May 10, 2024

SAN FRANCISCO, May 10, 2024 /PRNewswire/ -- RSA Conference™, the world's leading series of cybersecurity conferences and expositions, today concluded its 33rd annual event at the Moscone Center in San Francisco.

Key Points: 
  • "It's been so exciting as always, to watch our community convene in San Francisco for RSA Conference and this year was no exception," said Linda Gray Martin, Senior Vice President, RSA Conference.
  • RSA Conference 2024 by the Numbers:
    41,000+ attendees, including 650 speakers across 425 sessions, 400+ media members, and 600 exhibitors on the expo floors.
  • RSA Conference 2024 highlights include:
    Reality Defender was named "RSA Conference 2024's Most Innovative Startup" by the Innovation Sandbox judges' panel, comprising technology, venture, and security industry thought leaders.

Cloud Security Alliance Releases Three Papers Offering Guidance for Successful Artificial Intelligence (AI) Implementation

Retrieved on: 
Monday, May 6, 2024

“Thought leadership is the guiding force in the evolution of AI applications, shaping the trajectory of innovation and steering it towards ethical and impactful outcomes.

Key Points: 
  • “Thought leadership is the guiding force in the evolution of AI applications, shaping the trajectory of innovation and steering it towards ethical and impactful outcomes. Reports such as these reinforce CSA’s 15 years of cloud security leadership and going forward as thought leaders for one of the most consequential technologies of our lifetime,” said Jim Reavis, CEO and co-founder, Cloud Security Alliance.
  • "Our mission is to create practical and sensible frameworks and guidance for enterprise security teams on AI.
  • This is the first part of many of these deliverables to come in doing just that,” said Caleb Sima, Chair, CSA AI Safety Initiative.

Bugcrowd Introduces AI Penetration Testing to Improve Customers' Confidence in AI Adoption

Retrieved on: 
Wednesday, May 1, 2024

SAN FRANCISCO, May 1, 2024 /PRNewswire/ -- Bugcrowd, the leader in crowdsourced security, today introduced the availability of its AI Pen Testing on the Bugcrowd Platform to help AI adopters detect common security flaws before threat actors take advantage. AI Pen Testing is now part of Bugcrowd's AI Safety and Security Solutions portfolio, in addition to the recently announced AI Bias Assessment offering.

Key Points: 
  • AI also presents new categories of potential security vulnerabilities, as reflected in President Biden's Executive Order 14110, which calls for "AI red teaming" (methods unspecified) by all government agencies.
  • To learn how the Bugcrowd Platform can equip your organization to protect itself from AI risk, visit Bugcrowd.com or download The Ultimate Guide to AI Security.

LatticeFlow AI Joins the U.S. AI Safety Institute Consortium

Retrieved on: 
Tuesday, April 23, 2024

LatticeFlow AI, the leading platform empowering Artificial Intelligence (AI) teams to build performant, safe, and trustworthy AI solutions, proudly announces that it has joined the U.S. AI Safety Institute Consortium (AISIC).

Key Points: 
  • Dave Henry, SVP of Business Development at LatticeFlow AI, added: “AI safety programs are interdisciplinary in nature, requiring a broad range of management and technical skills to execute.”
  • LatticeFlow AI’s Commitment to the U.S. and NIST’s AI Safety Institute Consortium
    With its contributions to AISIC, LatticeFlow AI will continue its commitment to helping U.S. government agencies such as the U.S. Army ensure the safety and trustworthiness of mission-critical AI systems.
  • If you are interested in conducting an AI assessment, book a meeting with a LatticeFlow AI expert.

AIShield Announces Groundbreaking AI Security Platform SecureAIx with Global Strategic Partners at GISEC 2024

Retrieved on: 
Tuesday, April 23, 2024

DUBAI, UAE, April 23, 2024 /PRNewswire-PRWeb/ -- In a landmark announcement at GISEC 2024, AIShield, a Bosch startup recognized by Gartner for its pioneering AI cybersecurity technology, unveiled a series of significant innovations and strategic partnerships poised to redefine the landscape of artificial intelligence security.

Key Points: 
  • AIShield is excited to unveil SecureAIx – a Unified AI Security Platform at GISEC 2024.
  • This marks a strategic pivot towards platformization and consolidation in AI security, providing enterprises with an overarching view of and control over their AI security posture, underpinned by the voice of our customers and growing global market demand.
  • The following are some key releases and components:
    SecureAIx - Unified AI Security Platform (from ML to GenAI systems, encompassing development to deployment to operation and monitoring): The launch of the SecureAIx Platform is the cornerstone of our AI security product and innovation strategy and leadership (read the Press Release on the launch of the SecureAIx platform).
  • AIShield invites you to visit our booth (P54, Hall 6) at GISEC 2024 to witness the capabilities of the SecureAIx platform firsthand and discuss how our solutions can enhance your organization's AI safety and security posture.

AIShield Unveils SecureAIx - Unified AI Security Platform at GISEC 2024

Retrieved on: 
Tuesday, April 23, 2024

DUBAI, UAE, April 23, 2024 /PRNewswire-PRWeb/ -- In a significant leap forward for AI cybersecurity, AIShield, a Bosch startup recognized by Gartner for AI Application Security, proudly announces the launch of its Unified AI Security Platform – SecureAIx at GISEC 2024 in Dubai. This announcement marks a pivotal moment, showcasing AIShield's commitment to providing comprehensive and streamlined AI security solutions with end-to-end solution transformation. The platform will feature significant upgrades informed by customer feedback, which we will showcase at GISEC Dubai in 2024. Additionally, we plan to reveal our product integrations at this premier event, highlighting the seamless integration of the platform with the existing cybersecurity tech stack. In summary, AIShield's endeavor to bring AI security capabilities under a single umbrella is driven by two key drivers: our customers' preference for platformization and consolidation, and the goal of bringing unparalleled value to customers by facilitating strong collaboration between security and development teams, bolstering MLSecOps and LLMSecOps adoption.

Key Points: 
  • Empowering the Future of AI Security: AIShield's SecureAIx Delivers Pioneering AI Protection and Integration
  • Transitioning from previously segmented security solutions for classical ML and Generative AI, we are introducing a single, comprehensive AI Security Platform designed to meet all AI security requirements for enterprises, bringing unparalleled visibility into AI security posture to security and development teams from development to deployment.
  • SecureAIx is a comprehensive AI security platform designed to protect enterprise AI/ML models, applications, and workloads across various stages of development and operation (MLOps/LLMOps).
  • Advanced AI Security to avoid surprises: With 45+ patents and extensive attack coverage, the platform ensures protection against AI security threats.

AIShield Unveils Professional Services for Delivering End-to-End AI Security Solutions under SecureAIx Platform

Retrieved on: 
Tuesday, April 23, 2024

DUBAI, UAE, April 23, 2024 /PRNewswire-PRWeb/ -- AIShield, the Gartner-recognized Bosch startup acclaimed for its AI security platform SecureAIx, proudly introduces its Professional Services portfolio. With an unwavering commitment to providing cutting-edge technology and comprehensive support, AIShield continues to lead the industry in safeguarding AI systems against emerging threats and ensuring AI safety and security to the highest standards.

Key Points: 
  • Key modules of SecureAIx, AIShield's Unified AI Security Platform include:
    Watchtower: This module safeguards the AI/ML supply chain, addressing potential vulnerabilities from the earliest stages.
  • AIShield Implementation Services: Enabling seamless incorporation of AIShield's SecureAIx Platform into organizations' AI ecosystems to elevate security and operational efficiency.
  • With the expansion of its Professional Services division, AIShield reaffirms its dedication to delivering unparalleled support and value to clients worldwide.

H2O.ai Inaugurates GenAI World for Public Sector to Spur Growth and Innovation of Generative AI for Government Departments and Agencies

Retrieved on: 
Tuesday, February 27, 2024

H2O.ai, the open source leader in Generative AI and machine learning, continues its world tour of bringing GenAI World conferences to new sectors to further democratize AI.

Key Points: 
  • Sessions include “An Overview of the NIST AI Risk Management Framework” by Patrick Hall, Professor for AI Risk, The George Washington University.
  • These training sessions will cover GenAI use cases using public sector data, LLM benchmarking and evaluation best practices, GenAI interpretability, governance frameworks and model validation.
  • Founded in 2012, H2O.ai is at the forefront of the AI movement to democratize Generative AI.

More Than 300 International Experts Release Open Letter Demanding Government Leaders Take Immediate Action to Combat Deepfake Threats

Retrieved on: 
Wednesday, February 21, 2024

Current laws do not adequately target and limit deepfake production and dissemination, and existing requirements on creators are ineffective.

Key Points: 
  • The letter calls for establishing criminal penalties for any individual who knowingly creates or facilitates the spread of harmful deepfakes.
  • "We need immediate action to combat the proliferation of deepfakes, and my colleagues and I created this letter as a way for people around the world to show their support for law-making efforts to stop deepfakes."
  • Criminalizing deepfake child pornography is the least we can do to protect the dignity of children now and for generations to come.