AI May Be A Powerful Tool, But It’s No Substitute for Cyber Experts

BY Jeff Schwartzentruber

September 23, 2024 | 3 MINS READ

AI/ML | Generative AI

Originally posted on thestar.com on September 14, 2024.

This is the year of artificial intelligence (AI) integration: AI is now embedded in nearly everything we do. It increasingly senses and controls our physical environment through Internet of Things (IoT) devices, which connect our surroundings to the digital world. More than 15 billion such devices are now connected worldwide, and that number is expected to double by 2030.

We are also interacting ever more naturally with AI through chatbots.

As with any emerging technology, it started small: summarizing emails and drafting short replies, arguing with customer service chatbots over service changes and refunds, and asking bots for travel recommendations. Soon, however, AI will extend to every aspect of our work and home lives.

And there’s much more in store. As the technology advances, its ability to predict and protect against new cyber threats will be vital for safeguarding and maintaining trust in our interconnected world. AI also promises to improve communication and training within the cyber industry, simplifying complex technical concepts and making them more accessible to a wider audience.

That’s the good news. Now for the bad news.

AI systems will inevitably become targets for malicious actors. This is especially relevant to the next generation of systems, which have been shown to act in unexpected ways, such as exposing private and sensitive information.

AI-driven security solutions also risk producing floods of false alarms that burden information security personnel, even as genuine threats slip through unnoticed.
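A quick back-of-the-envelope calculation makes the false-alarm problem concrete. The numbers below are hypothetical, not drawn from any real deployment, but they illustrate the base-rate effect: when genuine threats are rare, even a highly accurate detector produces mostly false positives.

    # Hypothetical illustration of the base-rate problem in AI-driven alerting.
    # All numbers are invented for this sketch, not taken from real telemetry.
    events_per_day = 1_000_000    # events screened daily
    threat_rate = 1e-5            # fraction of events that are genuine threats
    true_positive_rate = 0.99     # detector catches 99% of real threats
    false_positive_rate = 0.001   # detector misfires on 0.1% of benign events

    threats = events_per_day * threat_rate          # 10 real threats
    benign = events_per_day - threats               # 999,990 benign events

    true_alerts = threats * true_positive_rate      # ~10 real detections
    false_alerts = benign * false_positive_rate     # ~1,000 false alarms

    precision = true_alerts / (true_alerts + false_alerts)
    print(f"Alerts per day: {true_alerts + false_alerts:,.0f}")   # ~1,010
    print(f"Fraction that are real: {precision:.1%}")             # ~1.0%

At that ratio, analysts wade through roughly a hundred false alarms for every genuine detection, which is precisely the alert fatigue described above.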

As AI evolves, it may seem poised to replace human technical expertise in cybersecurity. Yet it is no substitute: balancing AI-driven automation with human oversight is crucial, and that balance is an essential part of any robust cyber operation.
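One minimal sketch of that balance, with entirely illustrative thresholds and field names, is confidence-based triage: automation handles only the cases the model is most certain about, and everything ambiguous is routed to a human analyst.

    # Hypothetical sketch of human-in-the-loop alert triage.
    # Thresholds and the alert schema are assumptions for illustration.
    AUTO_CLOSE_BELOW = 0.05   # model is confident the alert is benign
    AUTO_BLOCK_ABOVE = 0.98   # model is confident the alert is malicious

    def triage(alert: dict) -> str:
        """Route an alert based on the model's malice score (0.0 to 1.0)."""
        score = alert["model_score"]
        if score < AUTO_CLOSE_BELOW:
            return "auto-close"           # automation clears obvious noise
        if score > AUTO_BLOCK_ABOVE:
            return "auto-contain"         # automation acts on clear threats
        return "escalate-to-analyst"      # humans judge the ambiguous middle

    print(triage({"model_score": 0.02}))  # auto-close
    print(triage({"model_score": 0.50}))  # escalate-to-analyst
    print(triage({"model_score": 0.99}))  # auto-contain

The design point is the middle band: whatever the model cannot confidently classify lands with a person, keeping human expertise in the loop rather than automating it away.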

The next significant boost in the AI revolution will come when these systems, which are still relatively isolated, group together into a larger intelligence: a vast network of power generation and consumption in which each building is just a node, like an ant colony or a human army.

Future industrial control systems will combine traditional factory robots with AI systems that schedule their operation. They will automatically order supplies and coordinate the shipping of finished products, calling on humans when needed to repair individual subsystems or to do work too specialized for robots.

But our newest robots will be very different from previous models. Their sensors and actuators will be distributed throughout the environment, and their processing will be dispersed. They’ll be networks that become robots only in the aggregate.

This will turn our notion of security on its head. If massive, decentralized AIs run everything, then who controls those AIs matters a lot.

It will be as if all the executive assistants or lawyers in an industry worked for the same agency. An AI that is both trusted and trustworthy will become a critical requirement.

This future requires us to see ourselves less as individuals and more as parts of larger systems. It’s AI as nature, as Gaia—everything is one system. It’s a future more aligned with the Buddhist philosophy of interconnectedness than Western ideas of individuality. (It also aligns with science-fiction dystopias, like Skynet from the Terminator movies.)

It will require rethinking many of our assumptions about governance and the economy. That won’t happen soon, but in 2024, we will likely see the first steps along that path.

That’s why the European Union’s passing of the Artificial Intelligence Act in March of this year couldn’t have come at a better time. This legislation bans high-risk AI applications, such as certain biometric and facial recognition systems, social scoring mechanisms, and AI designed for manipulation or exploitation. It imposes strict rules on high-risk AI systems in critical domains, such as infrastructure, education, and employment, requiring risk assessment, transparency, and human oversight.

Despite criticism from the industry for potentially hampering innovation and competitiveness, and from advocacy groups for not fully addressing ethical concerns, the phased implementation aims to balance regulation with practicality.

As innovators continue to propel the evolution of AI, many countries will remain significant contributors. But this evolution must be informed by policymakers and lawmakers who truly understand not only the potential benefits but also the very considerable risks.

Jeff Schwartzentruber
Senior Machine Learning Scientist

Dr. Jeff Schwartzentruber is a Senior Machine Learning Scientist at eSentire. His academic and industry research has concentrated on solving problems at the intersection of cybersecurity and machine learning (ML). For 10+ years, Dr. Schwartzentruber has applied ML to threat detection and security analytics for several large Canadian financial institutions, federal public sector organizations, and SMEs. In addition to his private sector work, he is an Adjunct Faculty member in the Department of Computer Science at Dalhousie University, a Special Graduate Faculty member with the School of Computer Science at the University of Guelph, and the Senior Advisor of AI at the Rogers Cybersecure Catalyst.
