Menlo Security report: Cybersecurity risks surge with AI adoption


New research from Menlo Security reveals how the explosive growth of generative AI is creating new cybersecurity challenges for enterprises. As tools like ChatGPT become ingrained in daily workflows, businesses must urgently reassess their security strategies. 

“Employees are integrating AI into their daily work. Controls can’t just block it—but we can’t let it run wild either,” said Andrew Harding, VP of Product Marketing at Menlo Security, in an exclusive interview with VentureBeat. “There’s been consistent growth in generative AI site visits and power users in the enterprise, but challenges persist for security and IT teams. We need tools that apply controls to AI tooling and help CISOs manage this risk while supporting the productivity gains and the insights that GenAI can generate.”

A surge in AI use and abuse

The new report from Menlo Security paints a concerning picture. Visits to generative AI sites within enterprises have surged more than 100% in just the past six months, and the number of frequent generative AI users has jumped 64% over the same period. But this deepening integration into daily workflows has opened dangerous new vulnerabilities.

While many organizations are commendably enacting more security policies around generative AI usage, most are relying on an ineffective domain-by-domain approach, according to researchers. As Harding told VentureBeat, “Organizations are beefing up security measures, but there’s a catch. Most are only applying these policies on a domain basis, which isn’t cutting it anymore.”
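To see why a domain-by-domain policy falls behind, consider a minimal sketch in Python. The domain list and function below are hypothetical, for illustration only, and not Menlo Security’s product logic: any generative AI site launched after the blocklist was written passes through unchecked.

```python
# Hypothetical sketch of a domain-by-domain policy; the domains and
# function here are illustrative assumptions, not a vendor's logic.

BLOCKED_GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com"}

def is_site_allowed(hostname: str) -> bool:
    # A domain-based policy only stops hosts enumerated in advance.
    return hostname not in BLOCKED_GENAI_DOMAINS

print(is_site_allowed("chat.openai.com"))      # False: known site is blocked
print(is_site_allowed("brand-new-genai.app"))  # True: unseen site slips through
```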

This piecemeal tactic simply can’t keep pace as new generative AI platforms constantly emerge. The report revealed that attempted file uploads to generative AI sites spiked an alarming 80% over six months, a direct result of these platforms adding file-upload functionality. And the risks go far beyond potential data loss through uploads.
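One common mitigation is to inspect files before they leave the browser. The sketch below is a hypothetical pre-upload check assuming simple pattern matching; the patterns and function names are illustrative, not a production data-loss-prevention engine.

```python
import re

# Hypothetical pre-upload content check; patterns are illustrative
# assumptions and would be far more extensive in practice.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-style identifiers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credential-looking strings
]

def should_block_upload(file_text: str) -> bool:
    # Block the upload if any sensitive pattern appears in the file.
    return any(p.search(file_text) for p in SENSITIVE_PATTERNS)

print(should_block_upload("quarterly summary, nothing sensitive"))  # False
print(should_block_upload("api_key = sk-123456"))                   # True
```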

Researchers warn generative AI may seriously amplify phishing scams as well. As Harding noted, “AI-powered phishing is just smarter phishing. Enterprises need real-time phishing protection that would prevent the OpenAI ‘phish’ from ever being a problem in the first place.”

From novelty to necessity

So how did we get here? Generative AI seemingly exploded overnight with ChatGPT-mania sweeping the globe. But the technology actually emerged gradually over years of research.

OpenAI launched its first generative AI system, GPT-1 (Generative Pre-trained Transformer), back in June 2018. This and other early systems were limited, but they demonstrated the technology’s potential.

When OpenAI unveiled DALL-E for image generation in early 2021, generative AI captured widespread public intrigue. Google Brain pushed scale further in April 2022 with PaLM, a model boasting 540 billion parameters. But it was OpenAI’s ChatGPT debut in November 2022 that truly ignited the frenzy.

Almost immediately, users began integrating ChatGPT and similar tools into their daily workflows. People casually queried the bot for everything from crafting the perfect email to debugging code. It appeared the AI could do almost anything.

But for businesses, this meteoric adoption introduced major risks that were often overlooked in the hype. Generative AI systems are inherently only as secure, ethical, and accurate as the data used to train them. They can unwittingly expose biases, share misinformation, and leak sensitive data.

These models pull training data from vast swaths of the public internet. Without rigorous monitoring, there is limited control over what content is ingested. So if proprietary information gets posted online, models can easily absorb this data — and later divulge it.

The balancing act

So what can be done to balance security and innovation? Experts advocate for a multi-layered approach. As Harding recommends, this includes “copy and paste limits, security policies, session monitoring, and group-level controls across generative AI platforms.”
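To make that concrete, here is a minimal sketch of layered, group-level controls. The group names, limits, and decision logic are hypothetical assumptions for illustration, not any vendor’s actual implementation.

```python
from dataclasses import dataclass

# Minimal sketch of layered controls: a site policy, a copy/paste
# limit, and session monitoring, applied per user group.

@dataclass
class GenAIPolicy:
    allow_site: bool        # security policy layer
    paste_char_limit: int   # copy-and-paste limit per event
    log_session: bool       # session monitoring on or off

POLICIES = {
    "engineering": GenAIPolicy(allow_site=True, paste_char_limit=2000, log_session=True),
    "finance": GenAIPolicy(allow_site=False, paste_char_limit=0, log_session=True),
}

def evaluate(group: str, paste_len: int) -> str:
    # Default-deny: groups without an explicit policy get the
    # most restrictive treatment.
    policy = POLICIES.get(group, GenAIPolicy(False, 0, True))
    if not policy.allow_site:
        return "block"
    if paste_len > policy.paste_char_limit:
        return "block-paste"
    return "allow"

print(evaluate("engineering", 500))   # allow
print(evaluate("engineering", 5000))  # block-paste
print(evaluate("finance", 10))        # block
```

The design point is that no single layer has to be perfect: a paste limit catches what a site policy allows, and session monitoring records what both miss.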

The past is prologue here. Organizations must learn from previous technological inflection points: widely adopted technologies like cloud, mobile, and the web each introduced new risks, and companies progressively adapted their security strategies as those paradigms matured.

The same measured, proactive approach is required for generative AI, and the window to act is rapidly closing.

Security strategies must evolve — and quickly — to match the unprecedented adoption of generative AI across organizations. For businesses, it is imperative to find the balance between security and innovation. Otherwise, generative AI risks spiraling perilously out of control.

