Darktrace Addresses Generative AI Concerns with Introduction of AI Models That Help Protect Data Privacy and Intellectual Property
CAMBRIDGE, England, June 12, 2023 /PRNewswire/ -- In response to the growing use of generative AI tools, Darktrace today announces the launch of new risk and compliance models to help its 8,400 customers around the world address the increasing risk of IP loss and data leakage. These new risk and compliance models for Darktrace DETECT™ and RESPOND™ make it easier for customers to put guardrails in place to monitor and, when necessary, respond to activity and connections to generative AI and large language model (LLM) tools.
- This comes as Darktrace's AI observed that employees at 74% of active customer deployments are using generative AI tools in the workplace[1].
- In one instance, in May 2023, Darktrace detected and prevented the upload of over 1GB of data to a generative AI tool at one of its customers.
- "Since generative AI tools like ChatGPT have gone mainstream, our company is increasingly aware of how companies are being impacted."
- [1] Based on data obtained on June 2, 2023, from active customer deployments with Call Home enabled, where Darktrace detected generative AI activity at some point.