<p><span class="small">May 07, 2026</span></p>
<h3>How AI could open up cybersecurity to a wider workforce</h3>
<p><b>By shifting the focus from memorizing arcane tools to overall strategy, AI will make the practice more inclusive—and more effective.</b></p>
<p><a href="https://www.weforum.org/stories/2026/05/how-ai-could-open-up-cybersecurity-to-a-wider-workforce/" target="_blank"><i>Previously published</i></a><i> by the World Economic Forum in May 2026.</i></p> <p>For decades, cybersecurity has been defined by its complexity. To work in the field meant mastering a thicket of proprietary tools, obscure query languages and vendor‑specific workflows. Expertise meant not only a grasp of security principles but knowing which menu hid which setting across dozens of platforms.</p> <p>The practice of cybersecurity became a kind of priesthood: a highly specialized discipline accessible only to those who spent years learning its rituals.</p> <p>Artificial intelligence (AI) is poised to dismantle this model by making cybersecurity far less arcane. AI is emerging as a powerful abstraction layer that lets people express their security intent in natural language while the system translates it into technical action. I believe this shift has the potential to democratize cybersecurity in ways the industry has never experienced.</p> <h4>The roots of the talent crisis</h4> <p>We’re facing a major global shortage of cybersecurity professionals; estimates from <a href="https://www.isc2.org/Insights/2025/12/2025-ISC2-Cybersecurity-Workforce-Study" target="_blank">ISC2’s 2025 Cybersecurity Workforce Study</a> show millions more workers are needed in the field—and that needed <i>skills</i> are in even greater demand. Organizations with significant skills gaps pay millions more per breach, and small businesses increasingly say a major cyberattack could put them out of business entirely.</p> <p>It would be one thing if this shortage were simply a matter of too few people choosing cybersecurity careers, but that’s not the case. The industry has created a structural barrier by fragmenting itself into an overwhelming maze of tools and specializations. 
<a href="https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/unified-cybersecurity-platform" target="_blank">One IBM report</a> finds the average security operations center manages 83 different tools from nearly 29 vendors. Each tool comes with its own interface, its own configuration language and its own “alert” taxonomy. This is not a security architecture; it’s an archaeological dig! Small wonder that in the report, 52% of executives identify complexity as the single biggest impediment to their cybersecurity operations.</p> <p>The result is a workforce problem rooted not in a lack of interest, but in a system that demands mastery of dozens of unrelated technical dialects. That is to say, the barrier to entry isn’t understanding security; it is navigating all those tools.</p> <h4>AI as the great abstraction layer</h4> <p>This is where AI changes everything. Natural language interfaces, large language models and agentic systems have begun to eliminate the need for people to memorize the mechanics of every tool in the stack. Instead of writing a query in a specialized language, for example, a security analyst can simply describe what they want to investigate. And rather than navigating through nested menus to configure a firewall rule, an administrator can tell the AI what the goal is, then let the system generate the correct configuration.</p> <p>This moves the complexity away from the human‑tool interface and into the machine‑threat interface, where AI can operate at machine speed. It allows people to focus on understanding the security problem rather than the syntax required to express it. This kind of abstraction does not emerge accidentally; it is the result of deliberate system design by AI builders creating platforms to translate human intent into machine action at scale.</p> <h4>A democratizing shift</h4> <p>Where, you may ask, does “democratizing” come in? 
Three dynamics make the AI transformation more than just a productivity boost. Organizations that work with an experienced AI builder are the ones most likely to benefit fully from these dynamics.</p> <ul> <li><b>Natural language collapses the learning curve.</b> Today, becoming a competent analyst requires learning not just security concepts but the idiosyncrasies of dozens of tools. This creates an artificial barrier: someone may understand how to hunt for threats but lack the procedural knowledge to express that intent in a specific system. When the interface becomes natural language, the barrier shifts from tool fluency to conceptual understanding. This opens the field to a much wider range of people, including those who may not have traditional technical backgrounds.<br> <br> </li> <li><b>Embedded intelligence redistributes expertise.</b> AI systems increasingly encode the judgment of seasoned practitioners. They learn what normal behavior looks like, what suspicious patterns resemble and what best‑practice configurations should be. This allows organizations with limited resources to access a level of expertise that previously required large, specialized teams. Consider one of the small businesses I mentioned that could go under if hit by a cyberattack. Going forward, such an enterprise will gain the ability to interact with an AI‑powered security platform using the same language (and receiving the same quality of insight) as a global enterprise.<br> <br> </li> <li><b>Platform consolidation reduces the knowledge burden.</b> As AI enables platforms to do more, the number of discrete tools will shrink. Fewer platforms shrink the volume of knowledge required to operate a security program. 
This not only simplifies operations but also lowers the barrier for new entrants who no longer need to learn a sprawling ecosystem of point solutions.</li> </ul> <h4>Risks remain, but the potential is enormous</h4> <p>Cybersecurity has never been easy—and that’s an understatement. Let me stress that AI‑fueled security and democratization don’t mean the threat landscape becomes simpler. For one thing, we’ve learned in short order that the same AI that empowers defenders empowers attackers. An improperly configured AI agent is, in effect, an always‑on, highly privileged digital employee—one that can be manipulated if not governed carefully.</p> <p>There is also the risk of overreliance. If most organizations depend on a small number of AI‑driven security platforms, a vulnerability in one could create systemic risk across sectors and geographies.</p> <p>Far from making cybersecurity a matter of typing commands in text boxes, AI is shifting where the complexity lives. Instead of requiring practitioners to memorize the inner workings of dozens of tools, it allows them to focus on judgment, risk and strategy—areas in which human insight remains irreplaceable.</p> <p>If managed responsibly, this change will open the field to a broader, more diverse workforce and give organizations of all sizes access to capabilities once reserved for the few. The priesthood is ending not because cybersecurity is easier or less important, but because it’s becoming more human‑centered, more accessible and ultimately more resilient.</p>
<p>Vishal leads Cognizant’s global cybersecurity strategy, strengthens threat protection capabilities and advances digital trust across client enterprises. Under his leadership, Cognizant is scaling its cybersecurity offerings to meet the evolving needs of global organizations, with a focus on resilience, regulatory alignment and secure digital transformation.</p>