
Google, Microsoft, OpenAI make AI pledges ahead of Munich Security Conference


In the so-called cybersecurity “defender’s dilemma,” the good guys are constantly on the run and must keep their guard up at all times, while attackers need only one small opportunity to break through and do some real damage. 

But, Google says, defenders should embrace advanced AI tools to help disrupt this exhausting cycle.

To support this, the tech giant today released a new “AI Cyber Defense Initiative” and made several AI-related commitments ahead of the Munich Security Conference (MSC) kicking off tomorrow (Feb. 16). 

The announcement comes one day after Microsoft and OpenAI published research on the adversarial use of ChatGPT and made their own pledges to support “safe and responsible” AI use. 


As government leaders from around the world come together to debate international security policy at MSC, it’s clear that these heavy AI hitters are looking to demonstrate how proactive they are on cybersecurity.

“The AI revolution is already underway,” Google said in a blog post today. “We’re… excited about AI’s potential to solve generational security challenges while bringing us closer to the safe, secure and trusted digital world we deserve.”

In Munich, more than 450 senior decision-makers and thought and business leaders will convene to discuss topics including technology, transatlantic security and global order. 

“Technology increasingly permeates every aspect of how states, societies and individuals pursue their interests,” the MSC states on its website, adding that the conference aims to advance the debate on technology regulation, governance and use “to promote inclusive security and global cooperation.”

AI is unequivocally top of mind for many global leaders and regulators as they scramble to not only understand the technology but get ahead of its use by malicious actors. 

As the event unfolds, Google is making commitments to invest in “AI-ready infrastructure,” release new tools for defenders and launch new research and AI security training.


Today, the company is announcing a new “AI for Cybersecurity” cohort of 17 startups from the U.S., U.K. and European Union under the Google for Startups Growth Academy’s AI for Cybersecurity Program. 

“This will help strengthen the transatlantic cybersecurity ecosystem with internationalization strategies, AI tools and the skills to use them,” the company says. 

Google will also:

  • Expand its $15 million Google.org Cybersecurity Seminars Program to cover all of Europe and help train cybersecurity professionals in underserved communities.
  • Open-source Magika, a new AI-powered tool aimed at helping defenders through file type identification, which is essential to detecting malware (see the sketch after this list). Google says the platform outperforms conventional file identification methods, providing a 30% accuracy boost and up to 95% higher precision on content such as VBA, JavaScript and PowerShell that is often difficult to identify. 
  • Provide $2 million in research grants to support AI-based research initiatives at the University of Chicago, Carnegie Mellon University and Stanford University, among others. The goal is to enhance code verification, improve understanding of AI’s role in cyber offense and defense and develop more threat-resistant large language models (LLMs). 
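
To make the file-type identification idea concrete, here is a minimal Python sketch of how a defender might slot Magika into an attachment triage step. It assumes Magika’s Python bindings expose a Magika() class with an identify_bytes() method whose result carries an output.ct_label content-type field, as shown in the project’s announcement examples; the exact method and field names in the released package may differ, and the list of “suspicious” labels here is an assumption for illustration.

```python
# Triage sketch (assumed API): flag script-like files for deeper malware scanning.
# Assumes the magika package exposes Magika().identify_bytes() returning a result
# with an output.ct_label field, per the project's announcement examples.
from pathlib import Path

from magika import Magika  # pip install magika

# Content types that often carry malicious logic and are hard to judge by extension alone
# (illustrative label set; actual Magika label names may differ).
SUSPICIOUS_TYPES = {"vba", "javascript", "powershell", "batch", "vbscript"}


def triage(paths: list[Path]) -> list[Path]:
    """Return the files whose detected content type warrants a deeper malware scan."""
    detector = Magika()
    flagged = []
    for path in paths:
        result = detector.identify_bytes(path.read_bytes())
        label = result.output.ct_label  # e.g. "javascript", "pdf", "python"
        if label in SUSPICIOUS_TYPES:
            flagged.append(path)
    return flagged


if __name__ == "__main__":
    import sys

    for suspicious in triage([Path(p) for p in sys.argv[1:]]):
        print(f"needs deeper scan: {suspicious}")
```

The point of this design is that detection keys off the file’s bytes rather than its extension, which attackers routinely spoof.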

Furthermore, Google points to its Secure AI Framework — launched last June — to help organizations around the world collaborate on best practices to secure AI. 

“We believe AI security technologies, just like other technologies, need to be secure by design and by default,” the company writes. 

Ultimately, Google emphasizes that the world needs targeted investments, industry-government partnerships and “effective regulatory approaches” to help maximize AI value while limiting its use by attackers. 

“AI governance choices made today can shift the terrain in cyberspace in unintended ways,” the company writes. “Our societies need a balanced regulatory approach to AI usage and adoption to avoid a future where attackers can innovate but defenders cannot.”

Microsoft, OpenAI combating malicious use of AI

In their joint announcement this week, meanwhile, Microsoft and OpenAI noted that attackers are increasingly viewing AI as “another productivity tool.”

Notably, OpenAI said it has terminated accounts associated with five state-affiliated threat actors from China, Iran, North Korea and Russia. These groups used ChatGPT to: 

  • Debug code and generate scripts
  • Create content likely for use in phishing campaigns
  • Translate technical papers
  • Retrieve publicly available information on vulnerabilities and multiple intelligence agencies
  • Research common ways malware could evade detection
  • Perform open-source research into satellite communication protocols and radar imaging technology

The company was quick to point out, however, that “our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks.” 

The two companies have pledged to ensure the “safe and responsible use” of technologies including ChatGPT. 

For Microsoft, these principles include:  

  • Identifying and acting against malicious threat actor use, such as disabling accounts or terminating services. 
  • Notifying other AI service providers and sharing relevant data. 
  • Collaborating with other stakeholders on threat actors’ use of AI. 
  • Informing the public about detected malicious use of AI in its systems and the measures taken against it. 

Similarly, OpenAI pledges to: 

  • Monitor and disrupt malicious state-affiliated actors. This includes determining how malicious actors are interacting with their platform and assessing broader intentions. 
  • Collaborate with others across the “AI ecosystem.”
  • Provide public transparency about the nature and extent of malicious state-affiliated actors’ use of AI and measures taken against them. 

Google’s threat intelligence team said in a detailed report released today that it tracks thousands of malicious actors and malware families, and has found that: 

  • Attackers are continuing to professionalize operations and programs
  • Offensive cyber capability is now a top geopolitical priority
  • Threat actor groups’ tactics now regularly evade standard controls
  • Unprecedented developments such as the Russian invasion of Ukraine mark the first time cyber operations have played a prominent role in war 

Researchers also “assess with high confidence” that the “Big Four” (China, Russia, North Korea and Iran) will continue to pose significant risks across geographies and sectors. For instance, China has been investing heavily in offensive and defensive AI and engaging in personal data and IP theft to compete with the U.S. 

Google notes that attackers are notably using AI for social engineering and information operations by developing ever more sophisticated phishing, SMS and other baiting tools, fake news and deepfakes. 


“As AI technology evolves, we believe it has the potential to significantly augment malicious operations,” researchers write. “Government and industry must scale to meet these threats with strong threat intelligence programs and robust collaboration.”

Upending the ‘defender’s dilemma’

On the other hand, AI supports defenders’ work in vulnerability detection and fixing, incident response and malware analysis, Google points out. 

For instance, AI can quickly summarize threat intelligence and reports, summarize case investigations and explain suspicious script behaviors. Similarly, it can classify malware categories and prioritize threats, identify security vulnerabilities in code, run attack path simulations, monitor control performance and assess early failure risk. 

Furthermore, Google says, AI can help non-technical users generate queries from natural language; develop security orchestration, automation and response playbooks; and create identity and access management (IAM) rules and policies.
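
As a purely illustrative sketch of the natural-language-to-query idea (not Google’s actual tooling), the snippet below uses the OpenAI Python SDK to turn an analyst’s question into a SQL query over a hypothetical auth_events log table; the model name, system prompt and table schema are all placeholders, and any LLM API would serve the same purpose.

```python
# Illustrative sketch only: turn an analyst's natural-language question into a
# log-search query via an LLM. The model name, system prompt and auth_events
# schema below are placeholders, not any vendor's actual tooling.
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY in the environment

client = OpenAI()

SYSTEM_PROMPT = (
    "You translate analyst questions into SQL over a table named auth_events "
    "with columns: ts, user, src_ip, action, success. Return only the query."
)


def question_to_query(question: str) -> str:
    """Ask the model to draft a query; an analyst should review it before running it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    print(question_to_query("Show failed logins from new IP addresses in the last 24 hours"))
```

In practice, a generated query like this would be reviewed by a human analyst before being run against production logs.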

Google’s detection and response teams, for instance, are using gen AI to create incident summaries, ultimately saving more than 50% of their time and yielding higher-quality incident analysis. 

The company has also improved its spam detection rates by roughly 40% with RETVec, its new multilingual, neural-based text processing model. And its Gemini LLM is fixing 15% of bugs discovered by sanitizer tools and providing code coverage increases of up to 30% across more than 120 projects, leading to new vulnerability detections. 

In the end, Google researchers assert, “We believe AI affords the best opportunity to upend the defender’s dilemma and tilt the scales of cyberspace to give defenders a decisive advantage over attackers.”
