OpenAI Plans to Test Watermarking for GPT-4o Images
According to multiple media reports, OpenAI has recently begun testing a change to GPT-4o image generation in ChatGPT: adding an “ImageGen” watermark to images generated for free users.
One speculated reason for the test is the surge of free users employing the ImageGen model to generate Studio Ghibli-style images. AI researcher Tibor Blaho pointed out on the social platform Threads that OpenAI is experimenting with two forms of watermarking: “visible” and “invisible.” Images generated for free ChatGPT users will carry an obvious “ImageGen” watermark, while images generated for paid subscribers will not be watermarked.
OpenAI Implements Dual Verification Mechanism with AI Watermarking and Invisible Metadata
According to WinBuzzer, in addition to the visible ImageGen watermark, OpenAI’s strategy includes a “dual verification mechanism”: invisible metadata is embedded in generated images based on the Coalition for Content Provenance and Authenticity (C2PA) standard, recording timestamps, software labels, and source labels so that a file’s origin can be verified. OpenAI has already deployed C2PA metadata in DALL·E 3 image generation to track content origins.
However, OpenAI has previously acknowledged that metadata-based verification has limitations: if images are cropped, screenshotted, or uploaded to platforms that strip metadata, these invisible markers become ineffective.
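This fragility is easy to demonstrate. The sketch below uses the Pillow imaging library and plain PNG text chunks as a stand-in for a C2PA manifest (which is likewise stored alongside the pixels, not in them); the field names are hypothetical, and this is not OpenAI’s actual implementation.

```python
# Sketch: why metadata-based provenance is fragile.
# PNG text chunks stand in for C2PA manifests; a simple re-save
# (as many platforms do when re-encoding uploads) drops them.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a tiny "generated" image and attach provenance metadata.
img = Image.new("RGB", (64, 64), color="white")
meta = PngInfo()
meta.add_text("provenance", "generator=ImageGen; created=2025-04-01")  # hypothetical fields
img.save("original.png", pnginfo=meta)

print(Image.open("original.png").text)   # provenance survives a plain save/load

# Simulate a platform re-encode (or a crop/screenshot): the pixels
# are kept, but the metadata is not carried over.
reencoded = Image.open("original.png").copy()
reencoded.save("reencoded.png")          # saved without pnginfo
print(Image.open("reencoded.png").text)  # -> {} : provenance is gone
```

The pixels of `reencoded.png` are identical to the original, yet nothing in the file any longer attests to how it was made.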
Despite this, OpenAI continues to actively support legislation on AI watermarking. For instance, OpenAI, along with Adobe and Microsoft, backs California’s AB 3211 bill, which would require technology companies to label AI-generated content to curb the spread of misinformation.
Google and Microsoft Also Adopt AI Watermarking
OpenAI is not alone: many tech giants are seeking ways to verify AI-generated content. For example, Google plans to expand the use of the SynthID system, developed by DeepMind, to Google Photos in February 2025. The technology applies not only to images generated entirely by AI but also to images edited with AI, embedding invisible watermarks directly in the image pixels.
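SynthID’s actual algorithm is proprietary and far more robust, but the general idea of hiding a mark in pixel values can be illustrated with a classic least-significant-bit (LSB) scheme. The sketch below is only a minimal illustration, not how SynthID works.

```python
# Sketch: a pixel-level invisible watermark via least-significant bits.
# Each pixel's lowest bit carries one watermark bit; the visible change
# per pixel is at most 1 out of 255 brightness levels.

def embed(pixels, bits):
    """Overwrite the lowest bit of each pixel value with one watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Read the watermark back from the lowest bit of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 117, 42, 180, 33, 91, 250, 64]   # toy 8-pixel grayscale image
mark   = [1, 0, 1, 1, 0, 0, 1, 0]               # hypothetical watermark bits

marked = embed(pixels, mark)
print(extract(marked, len(mark)))                       # -> [1, 0, 1, 1, 0, 0, 1, 0]
print(max(abs(a - b) for a, b in zip(pixels, marked)))  # -> 1 (imperceptible)
```

Production systems like SynthID spread the signal statistically across many pixels precisely because a naive scheme like this is trivial to erase, as the next section shows.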
Microsoft also introduced watermarking technology through its Azure OpenAI service in September 2024, embedding encrypted metadata into images generated by DALL·E, recording details such as who generated the image, when it was generated, and which software was used. Microsoft has also collaborated with Adobe, Truepic, and the BBC to establish a unified content verification standard across different platforms.
Can Watermarking Technology Be Challenged?
Watermarking technology is not immune to attack. In October 2023, researchers at the University of Maryland published a paper showing that AI watermarks could be defeated using a method called “diffusion purification”: by adding noise to an image and then removing it, invisible watermarks can be effectively erased. Diffusion purification can also forge watermarks, making images appear to be AI-generated when they are not.
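The “add noise, then denoise” idea can be caricatured in a few lines. The real attack uses an actual diffusion model as the denoiser; in the sketch below a simple neighborhood average stands in for it, which is already enough to destroy a fragile LSB-style watermark like the one sketched above.

```python
import random

random.seed(0)

# A toy 50-pixel image with a watermark hidden in the lowest bit of each pixel.
mark = [1, 0] * 25
watermarked = [(p & ~1) | b for p, b in zip(range(50, 100), mark)]

# Step 1: perturb every pixel with small random noise.
noisy = [p + random.randint(-4, 4) for p in watermarked]

# Step 2: "denoise" by averaging each pixel with its neighbors
# (a crude stand-in for a diffusion model's reverse process).
purified = [round(sum(noisy[max(0, i - 1):i + 2]) / len(noisy[max(0, i - 1):i + 2]))
            for i in range(len(noisy))]

# The image still looks the same, but the hidden bits are scrambled.
recovered = [p & 1 for p in purified]
errors = sum(r != b for r, b in zip(recovered, mark))
print(f"watermark bit errors after purification: {errors}/50")
```

Roughly half the recovered bits come out wrong, i.e. the watermark is reduced to noise, while the pixel values themselves barely move.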
The research team noted that solely relying on watermarks may not provide adequate protection against media manipulation or misinformation.
This article is republished in cooperation with: Digital Age
Source: WinBuzzer, Bleeping Computer