Media attention on generative artificial intelligence (GenAI) underscores a key challenge that CISOs, CIOs, and security leaders face — keeping current with the fast pace of technological change and the risk factors that this change brings to the enterprise. Security leaders are key players in understanding and managing the risks associated with new technology. While each novel technology brings new considerations and risks to evaluate, there are a handful of constants that the security profession must address.
First, there are temporal considerations. The speed of modern applications and development has undermined traditional risk assessments, which were effectively point-in-time analyses. Security today demands real-time context and actionable insights at machine speed. To keep pace, automation must become an integral part of security programs.
There is also the challenge of evaluating algorithms. Because most algorithms are the vendor's intellectual property, there is little transparency into how a given algorithm arrives at its output, so great care must be taken in vetting its fidelity.
Given all this, what methods should CISOs use to assess new technologies like GenAI? While each business will have its own approach to evaluating risk, here are some effective techniques that should be part of the methodology:
Engage the business
New technologies like GenAI have pervasive organizational impacts. Ensure that you solicit feedback and insights from key organizational stakeholders, including IT, lines of business, HR, general counsel, and privacy. You need to know how your colleagues are using AI now and how they intend to use it in the future.
Conduct a baseline threat model using STRIDE and DREAD
CISOs and their staff should go through prospective AI use cases and ask questions to evaluate how user activity within an AI application could be spoofed, how information could be tampered with, how transactions could be repudiated, where information disclosure could occur, how services could be denied, and how privileges could be elevated within the environment. A basic STRIDE model ensures that key risk categories are not omitted from the analysis. DREAD complements STRIDE by scoring each identified threat on damage potential, reproducibility, exploitability, affected users, and discoverability, so the team can prioritize which risks to address first.
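To make this concrete, here is a minimal sketch of how a team might record STRIDE findings for a hypothetical GenAI assistant and score them with DREAD. The threat descriptions and scores below are illustrative only, not a real assessment:

```python
from dataclasses import dataclass
from statistics import mean

# STRIDE categories: Spoofing, Tampering, Repudiation,
# Information disclosure, Denial of service, Elevation of privilege.
@dataclass
class Threat:
    stride_category: str
    description: str
    # DREAD factors, each scored 1 (low) to 10 (high)
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    @property
    def dread_score(self) -> float:
        """Average of the five DREAD factors, used to rank threats."""
        return mean([self.damage, self.reproducibility, self.exploitability,
                     self.affected_users, self.discoverability])

# Illustrative threats for a hypothetical GenAI chat assistant.
threats = [
    Threat("Information disclosure",
           "Employees paste confidential data into prompts", 8, 9, 8, 7, 6),
    Threat("Spoofing",
           "Attacker impersonates a user via a stolen API key", 7, 6, 5, 4, 5),
    Threat("Tampering",
           "Prompt injection alters the model's output to the user", 6, 7, 7, 6, 7),
]

# Rank the findings so the highest-scoring risks are reviewed first.
for t in sorted(threats, key=lambda t: t.dread_score, reverse=True):
    print(f"{t.dread_score:4.1f}  [{t.stride_category}] {t.description}")
```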
Evaluate telemetry risks
Newer applications and technologies, like the current forms of GenAI, may lack some of the traditional telemetry of more mature technologies. The CISO and security team members must ask basic questions about the AI service. A simple open-ended question may start the process: "What is it that we don't see that we should see with this application, and why don't we see it?" If questions like these are not asked, security professionals risk overlooking the exposures a new technology introduces.
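One lightweight way to turn that question into action is to compare the telemetry you would expect from a mature application against what the new service actually exposes. The sketch below assumes illustrative source names, not any vendor's real feature list:

```python
# Telemetry a mature enterprise application would typically emit.
expected_telemetry = {
    "authentication logs",
    "prompt/response audit trail",
    "data egress logs",
    "admin configuration changes",
    "api usage metrics",
}

# What vendor documentation or a proof-of-concept review shows is available.
available_telemetry = {
    "authentication logs",
    "api usage metrics",
}

# Anything missing becomes a documented gap to raise with the vendor
# or carry into the risk register.
for gap in sorted(expected_telemetry - available_telemetry):
    print(f"Missing visibility: {gap}")
```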
Use a risk register for identified risks
CISOs and their teams should document concerns about GenAI applications and how risks should be mitigated. GenAI presents many risks: while the algorithms these tools rely on are obfuscated, the data they draw on is largely in the public domain and can be quickly synthesized for both legitimate and nefarious purposes. Use a risk register to document each of these potential risks. This way, we can help our colleagues in the C-suite make informed decisions about whether the benefits of a specific AI function or application outweigh the risks.
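As a rough illustration, a register entry can be as simple as a structured record capturing the risk, its likelihood and impact, a proposed mitigation, and an owner. The entries below are hypothetical examples, not findings from any real assessment:

```python
import csv
from dataclasses import dataclass, asdict

# Each entry captures what decision-makers need: the risk, who owns it,
# how severe it is, and what mitigation is proposed.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: str      # e.g. "Low" / "Medium" / "High"
    impact: str          # e.g. "Low" / "Medium" / "High"
    mitigation: str
    owner: str
    status: str          # e.g. "Open", "Accepted", "Mitigated"

register = [
    RiskEntry("GENAI-001",
              "Sensitive data submitted in prompts may be retained by the provider",
              "High", "High",
              "Enforce DLP controls and an approved-use policy for GenAI tools",
              "CISO", "Open"),
    RiskEntry("GENAI-002",
              "Model output may include inaccurate or fabricated content",
              "Medium", "Medium",
              "Require human review before AI output reaches customers",
              "Line of business", "Open"),
]

# Export so the register can be shared with the C-suite for a risk decision.
with open("genai_risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(register[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(entry) for entry in register)
```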
Focus on training and critical thinking
The proverbial genie is out of the AI bottle. As security professionals, we must proactively embrace this change, evaluate sources of risk, and make prudent recommendations to remediate risks without interrupting or slowing the business down. By adopting a proactive approach, ensuring that our colleagues are well-trained in critical thinking, and exploring how services may be targeted, we can make our organizations more resilient as they embrace what AI may bring to the enterprise.
Adversaries are already using GenAI to up their game. As security leaders, it’s incumbent upon us to do the same, so our organizations can continue to safely advance into the future.