The Battlefield
What began as excitement over the capabilities of Generative AI has quickly turned to concern. Generative AI tools such as ChatGPT, Google Bard, and DALL-E continue to make headlines due to security and privacy concerns. They are even raising questions about what is real and what is not. Generative AI can pump out highly plausible and therefore convincing content. So much so that at the conclusion of a recent 60 Minutes segment on AI, host Scott Pelley left viewers with this statement: "We'll end with a note that has never appeared on 60 Minutes, but one, in the AI revolution, you may be hearing often: the preceding was created with 100% human content."
The Generative AI cyber war begins with this convincing, lifelike content, and the battlefield is wherever hackers are leveraging Generative AI tools such as ChatGPT. It is now remarkably easy for cybercriminals, even those with limited resources and zero technical knowledge, to carry out their crimes through social engineering, phishing, and impersonation attacks.
The Threat
Generative AI has the power to fuel increasingly sophisticated cyberattacks.
Because the technology can so easily produce convincing, human-like content, new cyber scams leveraging AI are harder for security teams to spot. AI-generated scams can come in the form of social engineering attacks, such as multi-channel phishing campaigns conducted over email and messaging apps. A real-world example could be an email or message containing a document, sent to a company executive by a third-party vendor via Outlook (email) or Slack (messaging app), directing them to click to view an invoice. With Generative AI, it can be almost impossible to distinguish a fake email or message from a real one, which is what makes it so dangerous.
One of the most alarming examples, however, is that with Generative AI, cybercriminals can produce attacks in multiple languages, regardless of whether the attacker actually speaks them. The goal is to cast a wide net, and cybercriminals won't discriminate against victims based on language.
The advancement of Generative AI signals that the scale and efficiency of these attacks will continue to rise.
The Defense
Cyber defense for Generative AI has notoriously been the missing piece of the puzzle. Until now. By using machine-to-machine combat, or pitting AI against AI, we can defend against this new and growing threat. But how should this strategy be defined, and what does it look like?
First, the industry must act to pit computer against computer instead of human against computer. To follow through on this effort, we must look to advanced detection platforms that can detect AI-generated threats and reduce both the time it takes to flag and the time it takes to resolve a social engineering attack that originated from Generative AI. That is something a human alone cannot do.
We recently conducted a test of how this might look. We had ChatGPT generate a language-based callback phishing email in multiple languages to see whether a Natural Language Understanding platform, or advanced detection platform, could detect it. We gave ChatGPT the prompt, "write an urgent email urging someone to call about a final notice on a software license agreement." We also instructed it to write the email in both English and Japanese.
The advanced detection platform immediately flagged the emails as a social engineering attack, but native email controls such as Outlook's phishing detection could not. Even before the release of ChatGPT, social engineering delivered via conversational, language-based attacks proved successful because it could dodge traditional controls, landing in inboxes with no link or payload. So yes, it takes machine-versus-machine combat to defend, but we must also make sure we're using effective artillery, such as an advanced detection platform. Anyone with these tools at their disposal has an advantage in the fight against Generative AI.
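To make the callback-phishing pattern concrete, here is a deliberately simplified toy sketch of the kind of signals involved. A real detection platform uses trained language models, not keyword lists; every cue list, weight, and function name below is hypothetical and for illustration only:

```python
import re

# Hypothetical cue lists; a real platform would use NLU models, not keywords.
URGENCY_CUES = ["urgent", "final notice", "immediately", "act now"]
CALLBACK_CUES = ["call us", "call the number", "phone", "contact us at"]

def callback_phishing_score(body: str) -> float:
    """Score an email body for callback-phishing cues (0.0 to 1.0)."""
    text = body.lower()
    urgency_hits = sum(cue in text for cue in URGENCY_CUES)
    callback_hits = sum(cue in text for cue in CALLBACK_CUES)
    # Callback phishing typically carries no link or attachment, just a
    # phone-call lure, which is exactly what link scanners miss.
    has_link = bool(re.search(r"https?://", text))
    score = 0.4 * min(urgency_hits, 2) / 2 + 0.4 * min(callback_hits, 2) / 2
    if not has_link:
        score += 0.2  # payload-free messages evade traditional controls
    return score

email = ("URGENT: final notice on your software license agreement. "
         "Please call us at the number below immediately to avoid suspension.")
print(callback_phishing_score(email))  # high score: flag for review
```

The point of the sketch is the shape of the signal, not the heuristic itself: the message is urgent, asks for a phone call, and contains no link for a traditional filter to catch.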
Given the scale and plausibility of the social engineering attacks afforded by ChatGPT and other forms of Generative AI, machine-to-machine defense can be refined further. For instance, this defense can be deployed in multiple languages. Nor does it have to be limited to email security; it can be applied to other communication channels such as Slack, WhatsApp, and Teams.
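One way to picture channel-agnostic defense is to normalize messages from every channel into a common envelope and run the same detector over all of them. The sketch below is an assumption about architecture, not a description of any real product; all names are illustrative:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    """Channel-agnostic message envelope; field names are illustrative."""
    channel: str   # "email", "slack", "whatsapp", "teams", ...
    sender: str
    text: str

def scan(message: Message, detector: Callable[[str], bool]) -> bool:
    """Apply the same detector regardless of the channel of origin."""
    return detector(message.text)

# Stand-in detector; a real deployment would call an NLU-based platform.
def naive_detector(text: str) -> bool:
    return "final notice" in text.lower()

inbox: List[Message] = [
    Message("email", "vendor@example.com",
            "Final notice: call about your license agreement."),
    Message("slack", "teammate", "Lunch at noon?"),
]
flagged = [m.channel for m in inbox if scan(m, naive_detector)]
print(flagged)  # ['email']
```

The design choice being illustrated is that the detection logic lives behind one interface, so adding a new channel means writing a new adapter into `Message`, not a new detector.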
Remain Vigilant
While scrolling through LinkedIn, one of our employees came across a Generative AI social engineering attempt. A strange "whitepaper" download ad appeared, with what can only generously be described as "bizarro" ad creative. On closer inspection, the employee spotted the telltale color pattern in the lower-right corner that DALL-E, an AI model that generates images from text prompts, stamps on the images it produces.
Encountering this fake LinkedIn ad was a stark reminder of the new social engineering dangers that emerge when such attempts are coupled with Generative AI. It's more critical than ever to be vigilant and suspicious.
The age of Generative AI being used for cybercrime is here, and we must remain vigilant and be prepared to fight back with every tool at our disposal.