Cyber Threat Intelligence report ‘Cyber Signals’ Published
Microsoft, in collaboration with OpenAI, has today published new intelligence identifying and detailing state-affiliated adversaries that are using AI in their attack techniques, and how Microsoft and OpenAI have detected, disrupted and counteracted their malicious activity.
“The world of cybersecurity is undergoing a massive transformation and Artificial Intelligence is at the forefront of this change, posing both a threat and an opportunity,” said Microsoft.
“But while cybercriminals can use AI as part of their exploits, AI also has the potential to empower organizations to defeat cyberattacks at machine speed and drive innovation and efficiency in threat detection, hunting, and incident response,” added Microsoft.
Microsoft noted that the objective of its research partnership with OpenAI is to ensure the safe and responsible use of AI technologies like ChatGPT, and that these uphold the highest standards of ethical application to protect the community from potential misuse.
The report noted that 2.5 billion cloud-based, AI-driven detections protect Microsoft customers every day, and that, over and above this, Microsoft also detects more than 65 million cybersecurity signals per day.
In its report, Microsoft listed Forest Blizzard (STRONTIUM), a highly effective Russian military intelligence actor linked to the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU Unit 26165), which has targeted victims of tactical and strategic interest to the Russian government; Emerald Sleet (Velvet Chollima), a North Korean threat actor that impersonates reputable academic institutions and NGOs to lure victims into replying with expert insights and commentary about foreign policies related to North Korea; and Crimson Sandstorm (CURIUM), an Iranian threat actor assessed to be connected to the Islamic Revolutionary Guard Corps (IRGC).
Microsoft’s report also referenced Charcoal Typhoon (CHROMIUM), a China-affiliated threat actor predominantly focused on tracking groups in Taiwan, Thailand, Mongolia, Malaysia, France and Nepal, as well as individuals globally who oppose China’s policies. It also named Salmon Typhoon, another China-backed group that throughout 2023 has been assessing the effectiveness of using LLMs to source information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs.
“AI is boosting our ability to analyse this information and ensure that the most valuable insights are surfaced to help stop threats. We also use this signal intelligence to power Generative AI for advanced threat protection, data security, and identity security to help defenders catch what others miss,” stated Microsoft in its report.
“A more secure future with AI will require fundamental advances in software engineering. It will require us to understand and counter AI-driven threats as essential components of any security strategy by working together to build deep collaboration and partnerships across public and private sectors to combat bad actors,” added Microsoft.
Amongst the emerging AI threats, Microsoft pointed to the critical concern posed by AI-powered fraud such as voice synthesis, where a three-second voice sample can train a model to sound like anyone. It stressed the importance of understanding how malicious actors use AI to undermine longstanding identity proofing systems “so we can tackle complex fraud cases and other emerging social engineering threats that obscure identities.”
As a reaffirmation of its commitment to ensuring the safety and integrity of the global tech sector, Microsoft, in its cyber threat intelligence report ‘Cyber Signals’, also announced a new set of Microsoft AI principles to prevent the misuse of its AI tools by malicious actors.
“Microsoft remains committed to responsible human-led AI featuring privacy and security with humans providing oversight, evaluating appeals, and interpreting policies and regulations,” concluded Microsoft.
