Uh-Oh: AI Adoption is Outpacing AI Security Measures By A Lot

  • michelle1593
  • Jan 7
  • 3 min read

By Dave Carey



It’s an old story for those of us who focus on cybersecurity. When a new technology becomes the thing of the moment, people rush to adopt it so quickly that implementation runs far ahead of security controls.


A good example is the digitization of the trucking industry. After lagging on that front for many years, the industry suddenly moved almost overnight to embrace everything from advanced TMS platforms to routing optimization to electronic driver logs and much more.


These tools quickly transformed the industry, and cyberattackers watched with glee because most trucking companies had not invested in the security capabilities that would protect them. The vulnerabilities were suddenly wide and varied, and now the industry is scrambling to catch up before more firms get hit with multi-million-dollar ransomware attacks.


So it doesn’t surprise me in the slightest that we see this happening again with AI.

Accenture’s 2025 State of Cybersecurity Resilience Report says that while AI adoption is proceeding at lightning speed, 90 percent of companies lack the ability to defend against AI-driven threats.


Some of the details in the report are deeply troubling:


  • Only 36% of leaders acknowledge that the pace of AI’s evolution is outstripping their own security protocols.

  • 15% of employees would share company data or authorize payments through messaging apps without verifying the sender’s identity, as long as the request appeared to come from a manager or peer.

  • Workforce confidence may be misplaced: while four in five employees (81%) believe they can identify a phishing attempt, the data points to serious overconfidence. That’s a clear risk, especially when coupled with a lack of specific training against modern threats.

One of the biggest threats comes in the form of AI-driven social engineering, which is a fancy way of saying: AI sends someone in your company a fake request, suggestion or prompt, and they fall for it.


That could mean clicking an unsafe link, opening a toxic attachment, or entering login credentials on an attacker-designed website. A lot of the AI-generated stuff is still clumsy and easy to detect, but that’s not stopping some people from being fooled – and it only takes one person to expose your entire system to a breach.


And the AI stuff is getting more sophisticated every day.


Even worse, it appears that much of the workforce is overconfident about its ability to recognize these attacks. They’re getting more refined all the time, and people need training to be able to sniff them out. Most aren’t getting it.


So if you’re going to go all-in with the integration of AI – and most of you are – how do you protect your company? Here are some critical steps you should be taking now:


  • Familiarize yourself with the kinds of threats that come uniquely and specifically through AI, and develop a protocol to manage and protect against them.

  • With every step in your AI adoption, consult an expert to make sure you know the security layer that needs to accompany it – and don’t implement the one without the other.

  • Commit to constant training of your team members to recognize social engineering threats and to know what not to do in response to a likely attack inquiry.
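To make that last point concrete, here’s a minimal sketch of the kind of automated check a protocol like this might include: flagging links whose domain closely resembles, but doesn’t exactly match, a trusted one – a classic phishing signature. The allowlist, threshold, and domains here are illustrative assumptions, not a vetted tool.

```python
# Minimal sketch: flag lookalike ("typosquatted") domains in links.
# The allowlist and threshold below are hypothetical examples.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}  # hypothetical allowlist

def is_suspicious(url: str, threshold: float = 0.75) -> bool:
    """Return True if the URL's host closely resembles, but does not
    exactly match, a trusted domain -- a common impersonation tactic."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED_DOMAINS:
        return False  # exact match: trusted
    for trusted in TRUSTED_DOMAINS:
        # similarity ratio in [0, 1]; near-misses suggest impersonation
        if SequenceMatcher(None, host, trusted).ratio() >= threshold:
            return True
    return False

print(is_suspicious("https://examp1e.com/login"))  # lookalike -> True
print(is_suspicious("https://example.com/login"))  # exact match -> False
```

A check like this won’t catch everything – attackers also register unrelated domains – but it’s the sort of simple, automatable guardrail worth pairing with the human training described above.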


I can help you with some more technical details, but above all else, take this seriously and stay vigilant about it. Yes, AI can help you in powerful ways. It can also be your system’s undoing if you don’t understand the havoc it can wreak.


As always, I’m happy to help. Call 616.217.3019 or e-mail dacarey@cybersynergies.io.


