Arturo Mercado

AI Education: The New Defense That the Channel Cannot Ignore

07/12/2025

On a bright December morning, Melonia da Gama, Training Director at Fortinet, was in her office reviewing the Global Cybersecurity Skills Gap Report 2025. When she read that 87% of cybersecurity professionals expect AI to enhance their roles, she thought, “Perfect, adoption is heading in the right direction.” As she read on, however, she encountered a more troubling reality: 48% of professionals admit they lack the in-house expertise to implement AI securely. That finding led her to reflect on the real challenge, and the great opportunity, for those of us working in the channel.

Adopting AI is not enough on its own; without adequate knowledge, organizations never get the most out of its capabilities. AI does not only help; used incorrectly, it can also do harm. Attackers are a step ahead, using deepfakes, AI-generated malware, and hyper-realistic phishing. Meanwhile, many users, especially younger ones, place too much trust in what they see on screen. Fortinet describes it plainly: AI “increases the speed, volume, and credibility of malicious campaigns.”

This situation opens a huge door for the channel. Every week, Melonia hears IT directors lament: “I can’t find AI profiles,” “I need cloud and cybersecurity engineers,” “My specialists are leaving.” Fortinet’s numbers back this up: 58% of companies are looking for engineering and network security profiles, 57% are seeking talent with AI experience, and the hardest positions to fill are in AI, machine learning, and cloud security (30%). In other words, if you have the knowledge and can provide training, your chances of closing projects skyrocket. Organizations have already understood that they cannot do it alone.

Melonia learned that teaching AI is also cybersecurity. We often assume that only specialists need to learn to use AI responsibly, but AI literacy is for everyone. She saw that sales teams expose sensitive data by using GenAI without filters, designers share confidential information without realizing it, interns trust AI-generated answers without verifying them, and any employee can fall for AI-boosted phishing. Two pillars therefore became mandatory parts of her offering:

1. Responsible Use of Generative AI: do not upload sensitive information to public tools, review internal policies before using GenAI, always verify results, and keep tools and systems up to date.

2. Identification of AI-Driven Threats: share less personal data, always use MFA, be critical of viral content (it may be a deepfake), and keep apps, browsers, and systems updated.

After even minimal training, companies see fewer of the incidents that were once routine. Fortinet already includes two AI-specific modules, one on generative AI and another on AI-driven threats, in its Security Awareness Training, and both can be incorporated into projects immediately. The results will be evident: better-informed clients, less frustration during tool adoption, and, above all, more comprehensive projects that go beyond simply selling licenses.

For Melonia, the conclusion is clear: you cannot defend what you do not understand, and AI is here to stay. If we do not teach our clients, AI will become their biggest vulnerability. Today, selling only firewalls, endpoints, or security suites is not enough; education is the added value that defines who wins projects and who falls behind. Choose your path: do not sell cybersecurity without teaching AI, and do it not because of trends, but because investing in AI literacy lets clients capture its benefits while mitigating its risks. That balance is exactly what companies are looking for today and what the channel must offer.


ARTMERLOP S.A.S. de C.V.