A critical security vulnerability in ChatGPT has been discovered that allows attackers to embed malicious SVG (Scalable Vector Graphics) and image files directly into shared conversations, potentially exposing users to sophisticated phishing attacks and harmful content.
The flaw, recently documented as CVE-2025-43714, affects the ChatGPT system through March 30, 2025.
Security researchers identified that, instead of rendering SVG code as text within code blocks, ChatGPT renders these elements inline when a chat is reopened or shared through a public link, allowing any embedded markup to execute.
This behavior effectively creates a stored cross-site scripting (XSS) vulnerability within the popular AI platform.
“The ChatGPT system through 2025-03-30 performs inline rendering of SVG documents instead of, for example, rendering them as text inside a code block, which enables HTML injection within most modern graphical web browsers,” said the researcher who goes by the handle zer0dac.
The security implications are significant. Attackers can craft deceptive messages embedded within SVG code that appear legitimate to unsuspecting users.

More concerning are the potential impacts on user wellbeing, as malicious actors could create SVGs with seizure-inducing flashing effects that may harm photosensitive individuals.
The vulnerability works because SVG files, unlike raster image formats such as JPG or PNG, are XML-based vector documents that can carry script elements and event-handler attributes. That is a legitimate feature of the format, but a dangerous one when the markup is improperly handled.
When these SVGs are rendered inline rather than as code, the embedded markup executes within the user’s browser.
“SVG files can contain embedded JavaScript code that executes when the image is rendered in a browser. This creates an XSS vulnerability where malicious code can be executed in the context of other users’ sessions,” explains a similar vulnerability report from a different platform.
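To make the failure mode concrete, here is a minimal TypeScript sketch of the two rendering paths, assuming a browser context; the payload, text, and function names are illustrative and are not taken from ChatGPT’s actual code. Note that browsers do not execute `<script>` tags inserted via `innerHTML`, but event-handler attributes such as `onload` still fire, which is sufficient for stored XSS.

```typescript
// Illustrative attacker-controlled "image" posted into a shared chat.
// SVG is XML, so it can legitimately carry event-handler attributes and
// <animate> elements (the latter is how a rapidly flashing,
// photosensitivity-triggering graphic could be built).
const attackerSvg = `
  <svg xmlns="http://www.w3.org/2000/svg" width="420" height="60"
       onload="alert(document.domain)">
    <text x="10" y="35">Your session expired. Sign in again at evil.example</text>
  </svg>`;

// UNSAFE: inline rendering. The browser parses the markup, the deceptive
// text displays as if it were a normal image, and the onload handler
// runs in the viewer's session.
function renderInline(container: HTMLElement, markup: string): void {
  container.innerHTML = markup;
}

// SAFE: render the markup as inert text inside a code block, the behavior
// the researcher expected. textContent never parses markup, so nothing runs.
function renderAsCodeBlock(container: HTMLElement, markup: string): void {
  const pre = document.createElement("pre");
  pre.textContent = markup;
  container.appendChild(pre);
}
```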

OpenAI has reportedly taken initial mitigation steps by disabling the link-sharing feature after the vulnerability was reported, though a comprehensive fix addressing the underlying issue is still pending.
Security experts recommend that users exercise caution when viewing shared ChatGPT conversations from unknown sources.
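For platforms that must display user-supplied SVG, one common defense is to sanitize the markup before it ever reaches the DOM. The sketch below uses the open-source DOMPurify library as an example; this is a generic illustration of the technique, not a description of OpenAI’s pending fix.

```typescript
import DOMPurify from "dompurify";

// One possible defense: strip scripts, event-handler attributes, and
// <foreignObject> (which can smuggle arbitrary HTML) from untrusted SVG
// before insertion, keeping only legitimate vector markup.
function sanitizeSharedSvg(untrusted: string): string {
  return DOMPurify.sanitize(untrusted, {
    USE_PROFILES: { svg: true, svgFilters: true },
    FORBID_TAGS: ["foreignObject"],
  });
}

// Alternatively, side-step rendering entirely and escape the markup for
// display as plain text, matching the code-block behavior described above.
function escapeForDisplay(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}
```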
The vulnerability is particularly concerning because most users implicitly trust content from ChatGPT and wouldn’t expect visual manipulation or phishing attempts through the platform.
“Even without JavaScript execution capabilities, visual and psychological manipulation still constitutes abuse, especially when it can impact someone’s wellbeing or deceive non-technical users,” the researcher noted.
This discovery highlights the growing importance of securing AI chat interfaces against traditional web vulnerabilities as they become more integrated into everyday workflows and communication channels.