Microsoft announces principles for dealing with threat actors who are using AI


OpenAI and Microsoft have published findings on emerging threats in the rapidly evolving domain of AI, showing that threat actors are incorporating AI technologies into their arsenal and treating AI as a tool to enhance their productivity in conducting offensive operations.

They have also announced principles shaping Microsoft’s policy and actions for mitigating the risks associated with the use of its AI tools and APIs by the nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates it tracks.

Despite the adoption of AI by threat actors, the research has not yet pinpointed any particularly innovative or unique AI-enabled tactics attributable to the misuse of AI technologies by these adversaries. This indicates that while threat actors’ use of AI is evolving, it has not yet led to unprecedented methods of attack or abuse, Microsoft said in a blog post.

However, OpenAI and Microsoft, along with their partner networks, are monitoring the situation to understand how the threat landscape might evolve with the integration of AI technologies.

They are committed to staying ahead of potential threats by closely examining how AI can be used maliciously, ensuring preparedness for any novel techniques that may arise in the future. 

“The objective of Microsoft’s partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models,” Microsoft stated in the blog post. “In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including Microsoft Copilot for Security, to elevate defenders everywhere.”

The principles outlined by Microsoft include:

  1. Identification and action against malicious threat actors’ use.
  2. Notification to other AI service providers.
  3. Collaboration with other stakeholders.
  4. Transparency to the public and stakeholders about actions taken under these threat actor principles.
