Explainable artificial intelligence

Explainable AI: How to Build Next-Gen AI Models That You Can Trust | Webinar by Quantzig

Retrieved on: 
Thursday, July 21, 2022

ML models are often regarded as black boxes that are impossible to interpret, and as a result many business leaders do not trust them.

Key Points: 
  • ML models are often regarded as black boxes that are impossible to interpret, and as a result many business leaders do not trust them.
  • Bias in the data sets has been a long-standing risk in training AI models.
  • Explainable AI is one of the key requirements for implementing responsible AI, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability.
  • In this webinar, our top minds will talk about what Explainable AI is and how organizations can build and deploy models at scale with utmost trust and confidence.

Comet Recognized as a Cool Vendor in AI Core Technologies by Gartner®

Retrieved on: 
Wednesday, July 20, 2022

Comet, provider of the leading development platform for machine learning (ML) teams from startup to enterprise, today announced that it was named one of the 2022 Gartner Cool Vendors in AI Core Technologies–Scaling AI in the Enterprise.

Key Points: 
  • Comet, provider of the leading development platform for machine learning (ML) teams from startup to enterprise, today announced that it was named one of the 2022 Gartner Cool Vendors in AI Core Technologies–Scaling AI in the Enterprise.
  • In this report, Gartner named select vendors working to address priorities around managing, governing and scaling AI initiatives across different industries.
  • "We are honored to be included on the Gartner Cool Vendors in AI Core Technologies list, which we believe is another strong outside validation of Comet being the category leader in this space," said Gideon Mendels, CEO and co-founder of Comet.
  • Comet was created to help companies reap the benefits and realize the full value from their ML investments.

Chrysalix Venture Capital Announces New Investment in Luffy AI, the Next Generation in Adaptive Intelligence for Robotics and Industrial Control

Retrieved on: 
Monday, July 18, 2022

Vancouver, BC and Delft, Netherlands, July 18, 2022 (GLOBE NEWSWIRE) -- Chrysalix Venture Capital, a global technology venture capital firm that specializes in transformational industrial innovation, announces a new investment in Luffy AI, the next generation in Adaptive Intelligence (AI) for the control and optimized performance of robotics, machine and industrial processes.

Key Points: 
  • Vancouver, BC and Delft, Netherlands, July 18, 2022 (GLOBE NEWSWIRE) -- Chrysalix Venture Capital, a global technology venture capital firm that specializes in transformational industrial innovation, announces a new investment in Luffy AI, the next generation in Adaptive Intelligence (AI) for the control and optimized performance of robotics, machine and industrial processes.
  • Luffy's technology can be applied to dynamic, non-linear control challenges for both industrial machines and large industrial processes.
  • "We are very excited to be working with Chrysalix Venture Capital," said Dr. Matthew Carr, Co-founder & CEO of Luffy AI.
  • "Chrysalix has deep expertise in bringing disruptive industrial technology to market, which makes them the perfect partner for our adaptive AI technology."

Explainable AI: Deploy AI with Trust and Confidence | Webinar by Quantzig

Retrieved on: 
Wednesday, July 13, 2022

ML models are often regarded as black boxes that are impossible to interpret, and as a result many business leaders do not trust them.

Key Points: 
  • ML models are often regarded as black boxes that are impossible to interpret, and as a result many business leaders do not trust them.
  • Bias in the data sets has been a long-standing risk in training AI models.
  • Explainable AI is one of the key requirements for implementing responsible AI, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability.
  • In this webinar, our top minds will talk about what Explainable AI is and how organizations can build and deploy models at scale with utmost trust and confidence.

DataRobot Named a Leader in AI/ML Platforms by Independent Research Firm

Retrieved on: 
Tuesday, July 12, 2022

DataRobot, an innovator in AI, today announced it has been named a Leader in The Forrester Wave: AI/ML Platforms, Q3 2022.

Key Points: 
  • DataRobot, an innovator in AI, today announced it has been named a Leader in The Forrester Wave: AI/ML Platforms, Q3 2022.
  • "DataRobot rises swiftly to meet enterprise teams where they want to be," according to the Forrester Wave report.
  • The report goes on to state that DataRobot has strengths in its tooling and functionality in data preparation, model evaluation and explanation, ModelOps, and application building.
  • "DataRobot is honored to be named a Leader in AI/ML Platforms."

TruEra Named to Fintech Power 50 List of the World’s Top Fintech Trailblazers

Retrieved on: 
Monday, July 11, 2022

More than 1,200 companies were nominated, and the list was then narrowed down to just 50 honorees.

Key Points: 
  • More than 1,200 companies were nominated, and the list was then narrowed down to just 50 honorees.
  • "TruEra stood out for its innovative approach to ensuring AI quality and governance, and for the measurable impact it's making for its Fortune 500 clients," said Mark Walker, Co-founder and COO of the Fintech Power 50.
  • "We're looking forward to working alongside TruEra in the coming year, helping to show how its solutions can advance AI initiatives at financial services companies."
  • In March 2022, the company was named to the Fast Company World Changing Ideas list for the second year in a row.

CalypsoAI Recognized as a Cool Vendor in AI Core Technologies by Gartner®

Retrieved on: 
Wednesday, July 6, 2022

SAN FRANCISCO, July 6, 2022 /PRNewswire/ -- CalypsoAI, the leader in building trust in artificial intelligence (AI) through independent testing and validation, announced that it has been named a 2022 Gartner® Cool Vendor in the "Cool Vendors™ for AI Core Technologies–Scaling AI in the Enterprise."

Key Points: 
  • SAN FRANCISCO, July 6, 2022 /PRNewswire/ -- CalypsoAI, the leader in building trust in artificial intelligence (AI) through independent testing and validation, announced that it has been named a 2022 Gartner® Cool Vendor in the "Cool Vendors™ for AI Core Technologies–Scaling AI in the Enterprise."
  • According to the Gartner report, "two in five organizations that are already using AI have had an AI privacy breach or security incident."
  • Attacks on AI systems have already been widely documented, and many organizations have not yet invested in tools to secure their AI.
  • Following this recognition, Neil Serebryany, CalypsoAI CEO, said, "We are thrilled to be recognized as a Cool Vendor in AI by Gartner."

Responsible Computing Holds Inaugural Meeting

Retrieved on: 
Wednesday, July 6, 2022

BOSTON, MA, July 06, 2022 (GLOBE NEWSWIRE) -- Today Responsible Computing (RC), a program of Object Management Group, announced it held its inaugural meeting on Wednesday, June 29, 2022.

Key Points: 
  • BOSTON, MA, July 06, 2022 (GLOBE NEWSWIRE) -- Today Responsible Computing (RC), a program of Object Management Group, announced it held its inaugural meeting on Wednesday, June 29, 2022.
  • Responsible Computing is a new consortium comprising technology innovators working together to address sustainable development goals.
  • In this first meeting, Steering Committee members and Responsible Computing executives discussed the focus of each of the six working groups.
  • Responsible Computing (RC) is a membership consortium for technology organizations that provides a framework for setting responsible corporate policies.