Featured Article: ChatGPT Gets Image Upgrade and Fewer Restrictions

ChatGPT’s image generation tools have just received a major upgrade, and while everyone is busy turning politicians into dreamy ‘Studio Ghibli’ characters, OpenAI’s (quiet) policy changes may prove to be the bigger story.
A New Visual Brain for ChatGPT
At the centre of this update is GPT-4o, OpenAI’s new omnimodal model. Unlike previous tools like DALL·E (which bolted image generation onto ChatGPT from the outside), GPT-4o builds it into the core of the chatbot. The result appears to be faster, sharper, and more context-aware image outputs.
This means ChatGPT can now:
– Generate photorealistic images directly from prompts.
– Follow detailed instructions more precisely (including rendering readable text in images).
– Edit existing pictures, including transforming or “inpainting” people and objects.
The upgrade is currently available for Pro users paying $200/month, with OpenAI promising access will “soon” be extended to Plus and free-tier users as well as API developers.
Users Experimenting
It’s been reported that users have been experimenting with the upgraded version, including uploading selfies and asking ChatGPT to transform them into Pixar-style avatars or place them in scenic landscapes. Others have reportedly been feeding the tool prompts like “Donald Trump in a Ghibli forest” (a dreamy, nature-filled setting inspired by Studio Ghibli films) or “South Park characters debating in Parliament” and getting eerily convincing results.
Why Studio Ghibli Is Suddenly Everywhere
It didn’t take long for social media to explode with whimsical, pastel-hued images that seemed plucked straight from ‘My Neighbour Totoro’ or ‘Spirited Away’, two of Studio Ghibli’s most iconic animated films. This is likely because GPT-4o’s new model was trained on a wide range of styles, including those reminiscent of iconic animation.
While OpenAI insists it avoids mimicking the work of any living artist (it actively blocks prompts that explicitly request such imitations), it seems clear to many that the model can now reproduce stylistic “vibes” with uncanny accuracy. This explains how the internet managed to flood X and Instagram with Ghibli-inspired memes within days.
However, this artistic mimicry has raised some eyebrows. A resurfaced 2016 video of Studio Ghibli co-founder Hayao Miyazaki calling AI-generated art “an insult to life itself” has been doing the rounds again, reigniting the debate around AI and artistic originality.
Soften the Rules, Sharpen the Debate
Perhaps the most quietly controversial part of this launch is what OpenAI removed. GPT-4o comes with a notably relaxed set of safeguards around image generation. While safety features still exist, especially for minors and violent or abusive content, the rules have actually changed quite significantly. For example, ChatGPT can now:
– Generate images of public figures like Elon Musk or Donald Trump.
– Depict racial features and body characteristics on request.
– Show hateful symbols (like swastikas) if done in educational or neutral contexts.
– Mimic the aesthetic of well-known studios (e.g. Pixar, Ghibli), though not named living artists.
Joanne Jang (OpenAI’s model behaviour expert) has explained the move as a shift from blanket refusals to more nuanced moderation, saying, “We’re focusing on preventing real-world harm,” and “not just avoiding discomfort.”
For example, ChatGPT used to reject prompts like “make this person heavier” or “add Asian features,” assuming them to be inherently offensive. Now, such requests are allowed if they are presented in a neutral or user-specific context.
This reflects OpenAI’s broader philosophy that censorship by default may suppress creativity or unfairly judge user intent. As Jang wrote in a recent blog post, “Ships are safest in the harbour,” adding “but that’s not what ships — or models — are for.”
Safety Isn’t Gone, It’s Just Different
That’s not to say the floodgates are wide open. Despite the apparent loosening of rules, the new image generator still uses a layered safety stack that includes:
– Prompt blocking (for inappropriate text before image generation).
– Output blocking (for images that breach policy after they’re made).
– A sophisticated moderation system, including child safety classifiers.
– Refusal triggers for prompts involving living artists or sexualised content.
Also, unlike earlier tools, GPT-4o seems especially cautious around children. It won’t allow editing uploaded images of realistic children and applies stronger classifiers to detect potential misuse involving minors.
Performance metrics from OpenAI’s system card also show the updated safety stack performs better than previous versions, especially in areas like gender and racial diversity. It’s been reported that in one test, 4o image generation produced diverse outputs 100 per cent of the time for group prompts compared to 80–89 per cent for DALL·E 3.
The Benefits for Users
The new capabilities have clear commercial potential. Designers, marketers, developers and content creators can now produce custom visuals, mockups, product renders, and marketing illustrations with minimal friction. For example:
– A property developer could quickly visualise housing concepts in different styles.
– An education provider could create bespoke, text-rich diagrams for course materials.
– A social media agency could mock up viral meme formats in seconds.
With enhanced control over composition, text, and detail, plus the ability to edit and iterate on existing images, GPT-4o appears to be taking AI image generation a step closer to mainstream creative workflows. Also, with API access rolling out, this could also give rise to entirely new applications built on top of GPT-4o, from instant avatar builders to interior design preview tools.
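For developers curious what building on the rolled-out API might involve, the sketch below shows one plausible pattern using the official OpenAI Python SDK’s images endpoint. To be clear, the model identifier (“gpt-image-1”), the supported sizes, and the parameter set here are illustrative assumptions rather than confirmed details of the GPT-4o rollout; the actual call is left commented out since it requires an API key.

```python
# Illustrative sketch only: the model name ("gpt-image-1") and the set of
# supported sizes are assumptions about how the new capabilities might be
# exposed, modelled on the existing images.generate endpoint.

def build_image_request(prompt: str, size: str = "1024x1024", n: int = 1) -> dict:
    """Assemble keyword arguments for an images.generate-style call."""
    allowed_sizes = {"1024x1024", "1024x1536", "1536x1024"}  # assumed options
    if size not in allowed_sizes:
        raise ValueError(f"unsupported size: {size}")
    return {
        "model": "gpt-image-1",  # assumed model identifier
        "prompt": prompt,
        "size": size,
        "n": n,
    }

# In a real application the payload would be sent via the official SDK:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   result = client.images.generate(**build_image_request("A pastel-hued forest"))

payload = build_image_request("An interior design preview of a bright living room")
print(payload["model"], payload["size"])
```

Wrapping the request in a small builder like this keeps validation (sizes, counts) in one place, which matters for the kinds of applications the article imagines, such as avatar builders or design preview tools, where user input feeds the prompt directly.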
Risks, Especially Around Trust and IP
Despite the excitement, this change is far from risk-free. For example, allowing depictions of public figures or sensitive racial and political symbols opens the door to misinformation, reputational damage, and potential misuse.
Even if OpenAI prohibits images that “praise extremist agendas,” critics worry that fringe users could find ways to skirt those limits or that mainstream users might be unaware of the implications of what they’re creating.
There’s also the ever-present issue of copyright. Training on “publicly available” data and corporate partnerships (e.g. with Shutterstock) may cover some ground but as Studio Ghibli-style memes go viral, the question of fair use resurfaces.
For businesses, this raises two key concerns:
1. Reputational risk. Could AI-generated visuals be misattributed, manipulated, or used maliciously?
2. Legal exposure. Could brand-generated content be seen as infringing on artistic or personal likeness rights?
As with previous AI developments, what’s technically possible may soon outpace what’s legally clear or culturally acceptable.
What It Means for the Wider AI Landscape
OpenAI’s move comes just weeks after Google faced backlash over Gemini’s historical inaccuracies and image bias, and amid growing political scrutiny. In the US, Republican lawmakers are probing tech firms over alleged censorship, a backdrop that likely informed OpenAI’s more libertarian-leaning policy update.
By relaxing its image generation rules now, OpenAI seems to be signalling that it trusts both its technology and its users enough to let go of some of the training wheels, and is (presumably) willing to weather the inevitable criticism if it means retaining or gaining ground against rising competitors like Meta AI.
What Does This Mean For Your Business?
OpenAI’s latest update appears to have placed ChatGPT on a new creative footing – one that blends impressive technical progress with a deliberately looser grip on content control. In doing so, the company appears to be steering away from the more cautious posture that has defined much of the AI sector to date (with the notable exception of Grok). Whether that’s a bold move or a risky one depends very much on how the public, regulators, and commercial users respond in the months ahead.
For UK businesses in particular, the ability to generate high-quality, editable, and highly specific imagery using a chatbot could significantly reduce production times for everything from ad campaigns to training materials. The tools now on offer may make it far easier for SMEs and creative agencies to iterate visually without relying on third-party design services, a potential leveller in an increasingly competitive digital landscape. For marketing teams, the prospect of generating branded content, explainer graphics, or social media visuals with a single prompt is clearly appealing.
However, those same businesses will need to tread carefully. As copyright debates heat up and content provenance tools remain in their early stages, there’s a real risk that missteps (however unintentional) could carry legal or reputational consequences. The temptation to experiment with viral visual styles or public figures is likely to be strong, but so will the scrutiny. Companies looking to incorporate these tools into their workflows will likely need internal guidance, or even new policies, around AI-assisted visual content.
Meanwhile, for artists, regulators, and platform providers, the questions are only getting thornier. What counts as fair use in an age of style mimicry? Who decides whether a request is educational, offensive, or somewhere in between, and how do companies like OpenAI draw policy lines that are both ethically sound and commercially sustainable? The fact that ChatGPT now permits the creation of imagery that was off-limits just weeks ago, including depictions of politicians, sensitive racial traits, and even controversial symbols, appears to reflect a broader change in how AI firms are interpreting their responsibilities.
In truth, it may be the AI market itself that forces the next evolution. With rivals like Google and Meta pursuing their own image-generation models and competing for developer mindshare, the pressure is on to deliver not just safety, but usability. OpenAI’s gamble appears to be that with the right blend of user freedom and behind-the-scenes safeguards, it can satisfy both the creative crowd and the cautious boardroom.