Protects organisations from the accidental misuse of AI tools such as OpenAI’s ChatGPT
ExtraHop, a leader in cloud-native network detection and response (NDR), today announced a new capability that gives organisations visibility into employees’ use of AI-as-a-Service (AIaaS) and generative AI tools such as OpenAI’s ChatGPT. Organisations can now better understand their risk exposure and whether these tools are being used in accordance with their AI policies.
As generative AI and AIaaS are increasingly adopted within enterprise settings, C-level executives are concerned that proprietary data and other sensitive information are being shared with these services. While AIaaS offers productivity improvements across a range of industries, organisations must be able to audit employee use – and potential misuse – of these tools to protect against the accidental exposure of confidential data.
“Organisations using AIaaS solutions run the risk of employees sharing proprietary data, leading to the loss of IP and customer data,” said Chris Kissel, Research Vice President of Security Products, IDC. “ExtraHop is addressing this risk to the enterprise by giving customers a mechanism to audit compliance and help avoid the loss of IP. With its strong and rich background in network intelligence, ExtraHop can provide unparalleled visibility into the flow of data related to generative AI.”
To help determine whether sensitive data may be at risk, ExtraHop offers customers visibility into the devices and users on their networks that are connecting to external AIaaS domains, the volume of data employees are sharing with these services, and, in some cases, the type of data and the individual files being shared.
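ExtraHop has not published the detection logic behind this capability, but the general approach described here, flagging network connections to known AIaaS endpoints and tallying outbound bytes per device, can be illustrated with a minimal sketch. Everything in the example below (the `AIAAS_DOMAINS` watchlist, the sample `FLOW_RECORDS`, and the summary function) is hypothetical and is not ExtraHop’s implementation; in a real deployment, flow records would come from network sensors rather than a static list.

```python
from collections import defaultdict

# Hypothetical watchlist of external AIaaS domains to monitor.
# A production deployment would maintain a larger, curated list.
AIAAS_DOMAINS = {"api.openai.com", "chat.openai.com", "api.anthropic.com"}

# Hypothetical flow records: (source_device, destination_domain, bytes_out).
FLOW_RECORDS = [
    ("laptop-042", "chat.openai.com", 18_304),
    ("laptop-042", "example.com", 912),
    ("build-server-7", "api.openai.com", 4_210_511),
]

def summarise_aiaas_traffic(records):
    """Total outbound bytes per device for connections to watched AIaaS domains."""
    totals = defaultdict(int)
    for device, domain, bytes_out in records:
        if domain in AIAAS_DOMAINS:
            totals[device] += bytes_out
    return dict(totals)

if __name__ == "__main__":
    for device, total in summarise_aiaas_traffic(FLOW_RECORDS).items():
        print(f"{device}: {total:,} bytes sent to AIaaS services")
```

Tallying outbound volume per device, as in this sketch, is what turns a simple domain watchlist into an auditing signal: a few kilobytes to a chat service looks very different from gigabytes leaving a build server.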
“Customers have expressed a real concern about employees sending proprietary data and other sensitive information into AI services, and until today, there has been no good way to assess the scope of this problem,” said Patrick Dennis, CEO, ExtraHop. “Amid the proliferation of AIaaS, it’s extremely important that we give customers the tools they need to see what is happening across the network, what data is being shared, and what could be at risk. With this new capability, our goal is to ensure that they can reap the wide-ranging benefits of generative AI while still maintaining data protections.”
Mark Bowling, Chief Risk and Information Security Officer, ExtraHop, added, “The use of generative AI, or perhaps more accurately, large language models, can pose serious cybersecurity challenges for multiple reasons if used incorrectly. While most of those challenges should be addressed internally at most organisations, based on their own confidentiality requirements, there is an overarching concern that tools like Grammarly, ChatGPT, and others introduce the possibility of a leak of confidential information. As a CISO, I want and need visibility into these tools; it is perhaps the most valuable capability for an organisation as we navigate the waters of these new, smart technologies.
“Reveal(x) and Reveal(x) 360 from ExtraHop now have the capability to identify and monitor when an organisation’s users, with or without appropriate authorisation, are accessing AIaaS. This visibility gives my team the ability to protect my organisation’s critical information, whether it is critical intellectual property such as source code, non-public information such as investment or financial data, or even protected health information (PHI). Quite simply, ExtraHop now helps me protect against that possibility.”