Microsoft Files Lawsuit Against Hacking Group Using Azure AI
Posted by Okachinepa on 01/12/2025 @ 
Source: SynEVOL
Credit: Ravie Lakshmanan



A "foreign-based threat-actor group" is being sued by Microsoft for using a hacking-as-a-service infrastructure to purposefully circumvent the security measures of its generative artificial intelligence (AI) services and create offensive and damaging content.

The threat actors "developed sophisticated software that exploited exposed customer credentials scraped from public websites," according to the tech giant's Digital Crimes Unit (DCU), and "sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services."
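The filing's claim that credentials were "scraped from public websites" refers to secrets accidentally published in code, configs, or paste sites. The sketch below is purely illustrative of that harvesting technique: the regex is a hypothetical generic pattern, not an actual Azure key format or anything from the court documents.

```python
import re

# Hypothetical illustration only: a crude pattern scan of the kind attackers
# reportedly use to harvest credentials accidentally published online.
# The regex matches a generic 32-character hex secret next to an "api_key"
# label; real Azure keys have their own formats, and this is NOT
# Microsoft's (or the defendants') actual logic.
KEY_PATTERN = re.compile(r'(?:api[_-]?key)["\'\s:=]+([0-9a-f]{32})', re.IGNORECASE)

def find_exposed_keys(text: str) -> list[str]:
    """Return candidate API keys embedded in a blob of public text."""
    return KEY_PATTERN.findall(text)

sample = 'config = { "api_key": "0123456789abcdef0123456789abcdef" }'
print(find_exposed_keys(sample))
```

This is why scanning your own repositories for committed secrets, and rotating any key that has ever appeared in public, is standard defensive practice.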

After gaining access to these services, such as Azure OpenAI Service, the threat actors monetized it by selling access to other malicious actors, along with detailed instructions on how to use these custom tools to produce damaging content. According to Microsoft, the activity was identified in July 2024.

The Windows maker said it has since revoked the threat-actor group's access, strengthened its safeguards, and put additional countermeasures in place to prevent such conduct from recurring. It also said it secured a court order to seize a website ("aitism[.]net") that was central to the group's criminal operation.

The widespread adoption of AI tools such as OpenAI's ChatGPT has also led threat actors to misuse them for nefarious purposes, such as creating malware or illegal content. Microsoft and OpenAI have repeatedly disclosed that nation-state groups from China, Iran, North Korea, and Russia use their services for reconnaissance, translation, and disinformation campaigns.

According to court filings, the operation was carried out by at least three unidentified individuals who used stolen Azure API keys and customer Entra ID authentication details to break into Microsoft systems and abuse DALL-E to generate harmful images in violation of the company's acceptable use policy. Seven other parties are believed to have used the group's services and tools for similar purposes.

Although it is not yet known how the API keys were obtained, Microsoft said the defendants engaged in "systematic API key theft" from multiple customers, including several U.S. companies, some of which are based in Pennsylvania and New Jersey.

"Using stolen Microsoft API Keys that belonged to U.S.-based Microsoft customers, defendants created a hacking-as-a-service scheme – accessible via infrastructure like the 'rentry.org/de3u' and 'aitism.net' domains – specifically designed to abuse Microsoft's Azure infrastructure and software," according to the filing.

A since-deleted GitHub project described de3u as a "DALL-E 3 frontend with reverse proxy support." The associated GitHub account was created on November 8, 2023.

Following the seizure of "aitism[.]net," the threat actors allegedly attempted to "cover their tracks, including by attempting to delete certain Rentry.org pages, the GitHub repository for the de3u tool, and portions of the reverse proxy infrastructure."

Microsoft observed that the threat actors used de3u and a custom reverse proxy service, known as the "oai reverse proxy," to issue Azure OpenAI Service API requests with the stolen API keys and unlawfully generate thousands of damaging images from text prompts. What kind of offensive imagery was produced is not known.

The server-based oai reverse proxy service routes communications from user computers through a Cloudflare tunnel into the Azure OpenAI Service, then returns the responses to the user's device.
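The core trick of such a proxy is request rewriting: the client's own credentials are stripped and a stolen key is substituted before the call is forwarded upstream. The sketch below shows only that rewriting step under stated assumptions; the endpoint URL and function names are illustrative, not details from the court filing. (The `api-key` header is Azure OpenAI's documented key-based auth header.)

```python
# Minimal sketch of the request rewriting a reverse proxy of this kind
# performs. The endpoint and names are illustrative assumptions, not
# details from the court filing.
AZURE_ENDPOINT = "https://example-resource.openai.azure.com"  # hypothetical

def rewrite_for_upstream(path: str, client_headers: dict, stolen_key: str):
    """Map an incoming client request onto an upstream Azure OpenAI call.

    The proxy drops the client's own credential headers and substitutes a
    stolen 'api-key' header, so the upstream sees what looks like a
    legitimate request from the compromised customer's account.
    """
    upstream_url = AZURE_ENDPOINT + path
    headers = {k: v for k, v in client_headers.items()
               if k.lower() not in ("authorization", "api-key", "host")}
    headers["api-key"] = stolen_key  # Azure OpenAI's key-based auth header
    return upstream_url, headers

url, hdrs = rewrite_for_upstream(
    "/openai/deployments/dall-e-3/images/generations?api-version=2024-02-01",
    {"Content-Type": "application/json", "Authorization": "Bearer user-token"},
    "STOLEN_KEY",
)
print(url)
print(hdrs)
```

Because the upstream only ever sees the stolen key, the abuse is billed to, and attributed to, the legitimate customer, which is what makes this pattern of "LLMjacking" hard to spot from the provider's side.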

"The de3u software allows users to issue Microsoft API calls to generate images using the DALL-E model through a simple user interface that leverages the Azure APIs to access the Azure OpenAI Service," Redmond stated.

The defendants' de3u application uses undocumented Microsoft network APIs to communicate with Azure servers, sending requests crafted to resemble legitimate Azure OpenAI Service API calls. These requests are authenticated using stolen API keys and other authenticating information.
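For the crafted traffic to pass as legitimate, it has to mirror the shape of a real image-generation request. The sketch below builds the JSON body that Azure's public images/generations API expects; the field names follow that documented API, while the prompt and values are illustrative.

```python
import json

# Sketch of the JSON body a DALL-E image-generation call to Azure OpenAI
# carries. Crafted requests of the kind described in the filing would need
# to mirror this shape to blend in with legitimate traffic. Field names
# follow Azure's public images/generations API; values are illustrative.
def build_generation_payload(prompt: str, size: str = "1024x1024") -> str:
    body = {
        "prompt": prompt,   # the text prompt driving image generation
        "n": 1,             # number of images requested
        "size": size,       # output resolution
    }
    return json.dumps(body)

print(build_generation_payload("a watercolor lighthouse at dusk"))
```

Since the request body itself is indistinguishable from normal use, providers must rely on key provenance, volume anomalies, and prompt-level safety filters to detect abuse.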

The use of proxy services to gain unauthorized access to LLM services was brought to light by Sysdig in May 2024 in relation to an LLMjacking attack campaign that used stolen cloud credentials to target AI offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure, Mistral, and OpenAI. The actors then sold the access to other parties.

"In order to accomplish their shared illicit goals, defendants have carried out the operations of the Azure Abuse Enterprise through a coordinated and ongoing pattern of illegal action," Microsoft stated.

"The defendants' pattern of illicit conduct extends beyond their attacks on Microsoft," it added, noting that the evidence it has uncovered so far shows the Azure Abuse Enterprise has targeted and victimized other AI service providers as well.