Will the AI bubble burst in 2025?
“In 2025, we expect the industry to pull back on the promises, investment, and hype of new AI capabilities and settle down into what is real versus marketing noise,” commented Morey J Haber, Chief Security Advisor at BeyondTrust.
Although AI has been present in applications for more than a decade, the generative AI capabilities showcased by OpenAI’s ChatGPT seem to have kickstarted a tech revolution among organizations around the world. While businesses were already aware that AI could improve productivity and efficiency, the capabilities unveiled through ChatGPT made them rethink their approach.
Generative AI applications, the next step in AI deployment for organizations, began making a stronger appeal to business leaders. Over the past two years, organizations around the world have been investing in generative AI, with IDC projecting generative AI investments in the Asia Pacific region to reach US$110 billion by 2028.
Today, generative AI can be applied to almost every part of an organization. There are generative AI applications that improve customer service, as well as applications in financial services. Data centers are also increasingly using generative AI for better management. Developers can rely on generative AI to write code for them, and cybersecurity professionals are relying on it not only to generate reports but also to predict and prevent potential threats.
Interestingly, BeyondTrust believes that while AI still has a lot to offer to industries, the world might actually be reaching an AI breaking point in 2025. According to Morey J Haber, Chief Security Advisor at BeyondTrust, the artificial inflation of AI has already peaked in 2024, and the bubble could burst across multiple verticals in 2025.
For Haber, while some of the promises of AI have come true, and the technology will continue to impress with its capabilities, AI-based technologies have largely failed to live up to the mountainous hype. Specifically, Haber believes terms like AI-enabled or AI-driven are overused and inappropriately applied to some solutions. He feels these terms will continue to take on more negative connotations that could actually hurt the marketing of the product or capability they are associated with.
“In 2025, we expect the industry to pull back on the promises, investment, and hype of new AI capabilities and settle down into what is real versus marketing noise. We’ll see narrow AI (not Artificial General Intelligence–this is decades out, at best guess) settle into industry use as a tool angled for basic security and AI workflows. Some examples might include automating the creation of products, streamlining supply chain workflows, and reducing the complexity and skill level needed to perform certain tasks, based on security best practices outlined by models like ATLAS from MITRE,” said Haber.
At the same time, Haber believes there will be more cyberattacks that leverage AI because of the technology’s low barrier to entry. In Asia Pacific, ransomware is still a big problem, and it now carries the added threat of double extortion.
“Ransomware has evolved double extortion to maximize the funds that can come out of an organization. You pay the ransom, they still have the data, now you're telling them not to leak the data. The second part of that is how ransomware or any malware attack is being done. Cybercriminals are using AI to launch social engineering and phishing attacks that are voice or text based,” said Haber.
The AI momentum in cybersecurity
The use of AI in cybersecurity is shaping up rather uniquely in Asia Pacific. Haber explained that while concepts like Zero Trust are gaining momentum in the region, many organizations are still learning the difference between Zero Trust, Zero Trust architecture, and Zero Trust-enabled products.
“Zero Trust is not a product. Most of the conversations I've had with customers have focused on Zero Trust workflows. For example, how to pick a workflow and Zero Trust-enable it, and then what are the tools that support it. I think this is a great first step for the region, especially in terms of understanding risks, because when you run Zero Trust workflows for remote access, remote employees, contractors and such, you can solve a lot of the phishing and ransomware problems,” explained Haber.
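The Zero Trust-enabled remote-access workflow Haber describes can be sketched as a deny-by-default access decision. This is a hypothetical illustration of the principle, not any vendor’s actual product logic; all names and checks are assumptions.

```python
# Hypothetical sketch of a Zero Trust access decision for a remote-access
# workflow: every request is evaluated on identity and device posture,
# and nothing is trusted by default. Names and fields are illustrative.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_authenticated: bool  # e.g. MFA completed
    device_compliant: bool    # e.g. managed, patched endpoint
    resource: str


def grant_access(req: AccessRequest) -> bool:
    """Deny by default; grant only when every check passes."""
    return req.user_authenticated and req.device_compliant


# A contractor on an unmanaged device is rejected even with valid credentials.
contractor = AccessRequest(user_authenticated=True,
                           device_compliant=False,
                           resource="finance-db")
print(grant_access(contractor))  # False
```

Because a phished credential alone no longer satisfies the policy, a workflow gated this way blunts many of the phishing and ransomware scenarios Haber mentions.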
From a BeyondTrust perspective, Haber pointed out that the company has focused on technology that “doesn't sell ransomware for ransomware”. Put simply, ransomware is a computer virus, and if the virus doesn't have permission to run, it can't execute and therefore can't infect the system.
“When organizations embrace concepts like least privilege, which is a Zero Trust principle, ransomware is stopped right out of the gate. There are reports that have come out over the years, and it hovers between 87% and 88% of ransomware being blocked just by least privilege. The remaining 12% to 13% generally use living-off-the-land attacks without admin rights.
However, when you have such a high percentage of common ransomware blocked by just removing admin rights, which is what BeyondTrust does, you can build your model to say, I can help you get to zero-trust or I can help you block ransomware or I can increase or lower your security posture with just one concept versus trying to target a tool and product just for something like ransomware,” said Haber.
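The least-privilege mechanism Haber describes, where a payload that requires admin rights simply never gets to execute, can be sketched as a simple policy check. This is a hypothetical illustration of the concept, not BeyondTrust’s actual implementation; the process names and fields are assumptions.

```python
# Hypothetical sketch of least-privilege enforcement: anything that
# requires admin rights is denied before it can execute at all.
from dataclasses import dataclass


@dataclass
class Process:
    name: str
    requires_admin: bool


def can_execute(process: Process, user_is_admin: bool) -> bool:
    """Block any process that needs admin rights when the user has none."""
    return not (process.requires_admin and not user_is_admin)


ransomware = Process(name="encryptor.exe", requires_admin=True)
spreadsheet = Process(name="excel.exe", requires_admin=False)

# For a standard (non-admin) user, the ransomware payload never runs,
# while everyday applications are unaffected.
print(can_execute(ransomware, user_is_admin=False))   # False
print(can_execute(spreadsheet, user_is_admin=False))  # True
```

The point of the sketch is Haber’s claim in miniature: one policy decision, removing admin rights, blocks the common case without needing a ransomware-specific tool.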
Looking at AI in cybersecurity, Haber commented that there are a couple of fundamental realities in how AI solutions are being implemented for business today. The first is the realization that AI is not going to replace the human being. AI in cybersecurity is not designed to minimize headcount or reduce the need for a person to do threat hunting or other tasks.
“There is a lot of misinformation out there about optimization where people think if I can optimize using AI, I don't need as many people. That's not the truth. It's more about minimizing risk or identifying new threats,” said Haber.
He elaborated, explaining that when an AI is able to detect something very obscure in an attack vector, organizations can potentially minimize dwell time. They can find a persistent threat they hadn't seen before.
For Haber, AI is really positioned in cybersecurity solutions today to help shrink the attack surface by identifying things early. However, there is still a need for human professionals behind the scenes to do the work.
“If you rely on the AI automation that some vendors are claiming, you potentially get into that scenario where AI just locks you out. And we've seen that with automation tools in the past,” he added.
This is also why Haber feels it's important to understand AI guidelines. These guidelines are designed to help companies and individuals understand the risks when information is left exposed or used inappropriately, and to guide how applications are developed so they don't fall into those traps.