After a year of sustained enthusiasm, the inevitable correction has arrived. It’s a mild one (for now), as the market adjusts the stock prices of major players like Nvidia, Microsoft, and Google, while others reassess their strategies and shift priorities. Gartner refers to this phase as the “trough of disillusionment,” where interest fades and implementations fall short of delivering the anticipated breakthroughs. This is the point where some technology producers falter or fail, and investment continues only if the surviving providers can enhance their products to meet the expectations of early adopters.
It’s important to recognize that this was always expected: the post-human revolution promised by AI advocates was never a realistic goal, and the initial excitement around early large language models (LLMs) was never grounded in actual market success.
AI Is Here to Stay
So, what’s next for AI? If it follows Gartner’s hype cycle, the current downturn will be followed by a “slope of enlightenment,” where the technology matures, its benefits become clearer, and vendors introduce second- and third-generation products. Ideally, this leads to the “plateau of productivity,” where mainstream adoption occurs, driven by the technology’s broad market appeal. However, Gartner emphasizes that not every technology recovers after the crash; the key is for the product to find its market fit quickly enough.
At present, it seems certain that AI is here to stay. Companies like Apple and Google are launching consumer products that package the technology into smaller, more user-friendly applications (like photo editing, text editing, and advanced search). While the quality of these products varies, it appears that some players have successfully productized generative AI in a way that is meaningful for both consumers and their own bottom line.
What Have LLMs Done for Us?
Where does this leave enterprise customers, particularly in cybersecurity? Generative AI still has significant drawbacks that hinder its widespread adoption. One major challenge is its inherently non-deterministic nature. Since generative AI is based on probabilistic models (a feature, not a bug), there will be variations in output. This unpredictability can be unsettling for industry veterans accustomed to traditional software behavior. As a result, generative AI is not a simple drop-in replacement for existing tools but rather an enhancement to them. Nonetheless, it has the potential to be a valuable layer in a multi-layered defense, precisely because its behavior is harder for attackers to predict.
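As a rough illustration of where that variability comes from: a generative model samples each next token from a probability distribution rather than returning a single fixed answer. The sketch below is purely illustrative; the tiny vocabulary, the logits, and the temperature value are invented for the example.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample one token index from temperature-scaled softmax probabilities."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical 4-token vocabulary with fixed logits: repeated calls can
# return different tokens, which is why identical prompts can yield
# different completions.
vocab = ["allow", "deny", "audit", "escalate"]
logits = [2.0, 1.5, 0.3, 0.1]
print([vocab[sample_next_token(logits)] for _ in range(5)])
```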
Another hurdle is the high cost of AI. Training models is expensive, and this cost is currently being passed on to consumers. Consequently, there’s a strong focus on reducing the per-query cost. Advances in hardware and breakthroughs in model refinement are expected to significantly decrease the energy consumption of AI models, and there’s hope that text-based output, in particular, will become profitable.
While cheaper and more accurate models are promising, integrating these models into organizational workflows remains a significant challenge. Society lacks the experience to efficiently incorporate AI technologies into daily work practices, and there’s also the question of how the existing workforce will adapt to these new tools. For example, studies have shown that human workers and customers sometimes prefer interacting with AI models that prioritize explainability over accuracy. A March 2024 study by Harvard Medical School found inconsistent results when radiologists used AI assistance, with some improving and others performing worse. The recommendation is to introduce AI tools into clinical practice cautiously, with a personalized and carefully calibrated approach to ensure optimal patient outcomes.
Regarding market fit, while generative AI may never replace programmers (despite some companies’ claims), AI-assisted code generation has become a useful prototyping tool for various scenarios. This capability is already proving valuable to cybersecurity specialists: generated code or configurations serve as a good starting point for building something quickly before refining it.
However, there is a significant caveat: this technology can accelerate the work of seasoned professionals who can quickly debug and fix the generated output, but it can be disastrous for less experienced users, as there is always a risk of generating unsafe configurations or insecure code that could compromise an organization’s cybersecurity if deployed in production. Like any tool, generative AI is useful when used correctly but harmful when it is not.
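To make the caveat concrete, here is a minimal sketch of the kind of sanity check an experienced engineer applies before generated output goes anywhere near production. The flagged sshd directives are only illustrative examples of risky settings; a real review goes well beyond pattern matching.

```python
# A toy reviewer for an AI-generated sshd configuration. The directives
# below are examples of settings a seasoned engineer would question.
RISKY_DIRECTIVES = {
    "PermitRootLogin yes": "allows direct root login",
    "PasswordAuthentication yes": "permits password-based authentication",
    "PermitEmptyPasswords yes": "accepts empty passwords",
}

def review_generated_config(config_text: str) -> list[str]:
    """Return warnings for known-risky lines in a generated config."""
    findings = []
    for line in config_text.splitlines():
        stripped = line.strip()
        for directive, reason in RISKY_DIRECTIVES.items():
            if stripped.startswith(directive):
                findings.append(f"{stripped!r}: {reason}")
    return findings

generated = """# produced by an AI assistant
PermitRootLogin yes
PasswordAuthentication yes
"""
for finding in review_generated_config(generated):
    print("REVIEW:", finding)
```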
A key characteristic of current generative AI tools is their deceptively confident tone when presenting results. Even if the generated content is blatantly wrong, these tools deliver it with such assurance that novice users can be easily misled. Remember, the computer might be “lying” about how certain it is, and it can be very wrong.
Another effective use case is customer support, specifically for level 1 support – helping customers who don’t bother reading the manual or FAQs. Modern chatbots can handle simple questions and route more complex queries to higher levels of support. While this approach isn’t ideal for customer experience, it can result in significant cost savings for large organizations with many untrained users.
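In practice, level 1 triage can be as simple as answering whatever matches a known FAQ and escalating everything else. The sketch below uses keyword matching as a stand-in for whatever retrieval or model a real chatbot would use; the FAQ entries are invented.

```python
# Minimal level 1 triage: answer questions that match a known FAQ topic,
# escalate everything else to a human queue.
FAQ = {
    "reset password": "Use the 'Forgot password' link on the login page.",
    "update billing": "Billing details can be changed under Account > Billing.",
}

def handle_query(query: str) -> str:
    q = query.lower()
    for topic, answer in FAQ.items():
        if topic in q:
            return answer                      # level 1: self-service answer
    return "Escalating to level 2 support."    # anything unrecognized goes up

print(handle_query("How do I reset password?"))
print(handle_query("My integration throws a 500 error."))
```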
The uncertainty surrounding AI integration in businesses has created a boon for the management consulting industry. For instance, Boston Consulting Group now generates 20% of its revenue from AI-related projects, while McKinsey anticipates that AI projects will account for 40% of its revenue this year. Other consultancies like IBM and Accenture are also on board. These projects vary widely, from simplifying ad translation between languages to enhancing search capabilities for procurement and creating hardened customer service chatbots that avoid hallucination and include references to sources for added trustworthiness. At ING, for example, only 200 out of 5,000 customer queries go through the chatbot, but this number is expected to rise as response quality improves. Much like the evolution of internet search, we might reach a tipping point where asking the bot becomes a reflexive action rather than sifting through data manually.
AI Governance Must Address Cybersecurity Concerns
Regardless of specific use cases, the new AI tools introduce a host of cybersecurity challenges. Similar to Robotic Process Automation (RPA) in the past, customer-facing chatbots require machine identities with appropriate, sometimes privileged, access to corporate systems. For instance, a chatbot might need to identify a customer and retrieve records from the CRM system – a scenario that should raise alarms for Identity and Access Management (IAM) experts. Implementing precise access controls around this experimental technology will be crucial.
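A minimal sketch of what precise access controls can look like for such a machine identity, assuming hypothetical scope names rather than any specific IAM product: the chatbot’s identity is granted only narrow, read-only scopes, and everything else is denied by default.

```python
# Hypothetical least-privilege scopes for a customer-facing chatbot's
# machine identity. The scope names and CRM actions are invented; the
# point is that the bot's token only allows narrow, read-only operations.
CHATBOT_ALLOWED_SCOPES = {"crm:customer:read", "crm:case:read"}

def authorize(requested_scope: str) -> bool:
    """Deny by default; permit only scopes granted to the bot identity."""
    return requested_scope in CHATBOT_ALLOWED_SCOPES

print(authorize("crm:customer:read"))   # True  - look up a customer record
print(authorize("crm:customer:write"))  # False - the bot cannot modify data
```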
The same principle applies to code generation tools used in DevOps processes: setting the correct access controls to the code repository will minimize the impact if something goes wrong and limit the damage if the AI tool itself becomes a cybersecurity liability.
And then there’s the issue of third-party risk: by adopting a powerful but not yet fully understood tool, organizations expose themselves to adversaries who might probe the limits of LLM technology. The relative immaturity of the field could be problematic since we don’t yet have best practices for securing LLMs. Therefore, ensuring that these tools do not have write privileges in sensitive areas is essential.
AI Opportunities in IAM
AI use cases and opportunities in access control and IAM are beginning to take shape, with products now being delivered to customers. Traditional machine learning applications like role mining and entitlement recommendations are being reimagined in light of modern methods and user interfaces (UIs). Role creation and evolution are becoming more integrated into out-of-the-box governance workflows. Innovations like peer group analysis, decision recommendations, and behavior-driven governance are increasingly standard in Identity Governance. Customers now expect enforcement point technologies like Single Sign-On (SSO) Access Management systems and Privileged Account Management systems to offer AI-powered anomaly and threat detection based on user behavior and sessions.
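As a simplified illustration of peer group analysis, the sketch below flags entitlements a user holds that are rare among peers in the same group; the users, entitlements, and threshold are invented for the example, and production implementations are considerably more sophisticated.

```python
from collections import Counter

# Hypothetical peer group: three users in the same role with their entitlements.
peers = {
    "alice": {"crm_read", "vpn"},
    "bob":   {"crm_read", "vpn"},
    "carol": {"crm_read", "vpn", "prod_db_admin"},
}

def outlier_entitlements(user: str, peer_entitlements: dict, threshold: float = 0.5):
    """Flag entitlements the user holds that fewer than `threshold` of peers hold."""
    others = [ents for name, ents in peer_entitlements.items() if name != user]
    counts = Counter(ent for ents in others for ent in ents)
    return {
        ent for ent in peer_entitlements[user]
        if counts[ent] / max(len(others), 1) < threshold
    }

print(outlier_entitlements("carol", peers))  # {'prod_db_admin'} stands out for review
```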
Natural language interfaces are also improving user experience across all these IAM solutions by enabling interactive exchanges with the system. While static reports and dashboards are still necessary, the ability for individuals with different responsibilities to express themselves in natural language and refine search results interactively reduces the skills and training needed to derive value from these systems.
This Is the End of the Beginning
One thing is certain: whatever the state of AI technology in mid-2024, it’s not the end of the road for the field. Generative AI and LLMs are just one aspect of AI, with many other AI-related areas making rapid progress, fueled by advances in hardware and substantial government and private research funding.
As AI continues to evolve, security professionals need to assess how generative AI can enhance their defenses, understand the potential vulnerabilities it introduces, and develop strategies to contain the risks if something goes wrong.
About the Author:
This article was contributed by Robert Byrne, Field Strategist at One Identity. Rob has over 15 years of experience in IT, specializing in identity management. He has held various roles in development, consulting, and technical sales, and has worked with companies such as Oracle and Sun Microsystems. Rob holds a Bachelor of Science degree in mathematics and computing.