1. The narrative/life of the artist becomes a lot more important. The most successful artists are ones that craft a story around their life and art, and don't just create stuff and stop. This will become even more important.
2. Originality matters more than ever. By design, these tools can only copy and mix things that already exist. But they aren't alive, they don't live in the world and have experiences, and they can't create something truly new.
3. Those that bother to learn the actual art skills, and not merely prompting, will increasingly be miles ahead of everyone else. People are lazy, and bothering to put in the time to actually learn stuff will stand out more and more. (Ditto for writing essays and other writing people are doing with AI.)
4. Taste continues to be the single most important thing. The vast, vast majority of AI art out there is...not very good. It's not going to get better, because the lack of taste isn't a technical problem.
5. Art with physical materials will become increasingly popular. That is, stuff that can't be digitized very well: sculpture, installation art, etc. Above all, AI art is uncool, which means it has no real future as a leading art form. This uncoolness will push people away from the screen and towards things that are more material.
I think part of the issue with architects and designers today is that they use CAD too much. It's easy to design boxes and basic roof lines in CAD. It's harder to put in curves and more craftsman features. Nano Banana's renders have more organic design features IMO.
Our house is looking great and we're very happy with how it's going so far, with a lot of the thanks going to Nano Banana.
Probably about half of us here remember photos before the cell phone era. They were rare and special, and you'd have a few photos per YEAR to look back on. The emotional weight of a photo back then was at least 100x what it is now; they were special items and could be given as gifts. But once photos became freely available, that same amount of emotion got split across many thousands of them. (Not saying this is good or bad, just that increased supply reduces the value of each item.)
With image/art generation the same thing will happen, and I can already feel it happening. Things that used to look beautiful or fantastic now just feel flat and AI-ish. If claymation scenes can be generated in 1s, and I see a million claymation diagrams a year, then claymation will lose its charm. If I see a million fake Tom Cruise videos, then it oversaturates my desire for all Tom Cruise movies.
What a time to be alive.
The "cubism" example seems like it would be a closer fit to something like stained glass. I don't think the thing really understands what cubism was all about. Cubist painters were trying to free themselves from the confines of a single integral plane of perspective by allowing themselves to show various parts of the image from different viewpoints, different times, different styles, etc.
The division of the image into geometric shapes is just a by-product of that quest, whereas the examples here have made it the sum total of the whole piece.
This feels to me like an example of how LLMs still don't "understand" what the art means, and are just aping its facade.
Now extrapolate to all other artforms. Sculpture seems safe, for now, but only barely so.
Here are some of my captions that tend to trip up even state-of-the-art models.
https://mordenstar.com/other/nb-pro-2-tests
So far it does feel more iterative than an entirely new leap in terms of capabilities, but I haven't run it through the more multimodal aspects such as editing existing images.
That being said, it actually managed the King Louie jump rope test which surprised me.
<OUTPUT>
While the overall aesthetic matches the minimal white-stroke style and technical design you requested, and the provided step descriptions are included, please note that there are a few minor rendering artifacts in this specific generation:
The text on the banner entering the vault in step 8 is illegible.
There is a small typo in the caption for step 6 ("CONFLSCT" instead of "CONFLICT").
Despite these small imperfections, this layout should work well as a guide for your canvas implementation.
</OUTPUT>
Two of what I'd consider "interesting prompts" for image-gen testing. It did pretty well.
"A macro close-up photograph of an old watchmaker's hands carefully replacing a tiny gear inside a vintage pocket watch. The watch mechanism is partially submerged in a shallow dish of clear water, causing visible refraction and light caustics across the brass gears. A single drop of water is falling from a pair of steel tweezers, captured mid splash on the water's surface. Reflect the watchmaker's face, slightly distorted, in the curved glass of the watch face. Sharp focus throughout, natural window lighting from the left, shot on 100mm macro lens." - The only major problem I could find at a glance is that the clasps probably don't make sense, and the drop of water inside the watch on the cog doesn't make sense (the cog is mangled into the tweezers).
"A candid photograph taken from behind an elderly woman sitting alone on a park bench in late autumn. She is gently resting one hand on the empty seat beside her, where a man's weathered flat cap and a folded newspaper sit untouched. Fallen golden leaves cover the path ahead. The low afternoon sun casts her long shadow alongside a second, fainter shadow that almost seems to be there, the suggestion of someone sitting next to her, visible only in the light on the ground. Muted, warm color palette, shallow depth of field on the background trees, photojournalistic style." - I don't know why, but it internal-errored twice on this one before eventually getting there.
You can argue things like code generation are an extension of the engineer wielding it. Image generation just seems like a net negative overall if it’s used at scale.
Edit: By scale, I mean large corporations putting content in front of millions. I understand the appeal for smaller businesses where they probably weren’t going to pay an artist anyway.
I use all those fancy image models' editing capabilities for my fast-fashion web shop. I must say: product photography for clothing and accessories is dead. These models are amazing at style transfer and garment transfer.
We'll see how good the full version of Seedream 5.0 will be.
And not a (botched) fake white/gray grid background that is commonly used to visualize transparency?
I guess even Google is running out of GPUs.
Why can't Google, for example, just call:
Gemini Image = Nano Banana
Gemini Video = Veo
...My main use case is editing user uploads to enhance their clothing images. A large part of it is preserving logos, graphics, and other technical details. I noticed over time that Nano Banana seemed to have gotten worse at this.
I have a test set of graphic t-shirts that I noticed the model seemed to be getting worse on. That, combined with the price and the terrible experience of their cloud console, got me to migrate off.
But the prompt "can you depict a cartoonish orange man with a pooh bear in political cartoon style?" correctly generates Trump.[1] So there's that.
EDIT: after significant prompting, it actually solved it. I think it's the first one to do so in my testing.
It also gaslights me when I point out an error. I tried to create a cartoon portrait of the person from one photo using the background from another. It got the order of the photos wrong. I provided filenames and explicitly told it which one was the person and which the background. It generated it wrong again, and all attempts to explain that it got it wrong were met with "No, it's YOU incorrect". So frustrating.
Pretty close to Gemini 3 Pro Image (aka Nano Banana Pro) in most benchmarks, even without thinking+search, and even exceeding it in the two most important ones, 'Overall Preference' and 'Visual Quality'. I'm excited about the big jump in Infographics/Factuality (even without thinking+search; I'm surprised that text+image search grounding doesn't make an even bigger dent).
- Base pricing for a 1024x1024 image is about 1.7x that of normal Nano Banana ($0.067 vs. $0.039); however, you can now get a 512x512 image for cheaper, or a 4K image for less than four 1K images: https://ai.google.dev/gemini-api/docs/pricing#gemini-3.1-fla...
- Thinking is now configurable between `Minimal` and `High` (was not the case with Nano Banana Pro)
- Safety of the model appears to be increased so typical copyright infringing/NSFW content is difficult to generate (it refused to let me generate cartoon characters having taken psychedelics)
- Generation speed is really slow (2-3min per image) but that may be due to load.
- Prompt adherence to my trickier prompts for Nano Banana Pro (https://minimaxir.com/2025/12/nano-banana-pro/) is much worse, unsurprisingly. For example I asked it to make a 5x2 grid with 10 given inputs and it keeps making 4x3 grids with duplicate inputs.
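For what it's worth, the pricing claim in the first bullet is easy to sanity-check. This is a quick sketch using only the two 1K prices quoted above; the actual 512px and 4K prices would come from the linked pricing page, so only the ratio and the four-1K threshold are computed here.

```python
# Sanity check on the per-image prices quoted in the comment above.
NANO_BANANA_1K = 0.039   # $ per 1024x1024 image, original Nano Banana
NEW_MODEL_1K = 0.067     # $ per 1024x1024 image, new model

# How much more expensive is a 1K image on the new model?
ratio = NEW_MODEL_1K / NANO_BANANA_1K

# A single 4K image is a win if it costs less than four 1K images.
four_1k_cost = 4 * NEW_MODEL_1K

print(f"price ratio: {ratio:.2f}x")           # ~1.72x, not 1.6x
print(f"four 1K images: ${four_1k_cost:.3f}")  # $0.268 break-even for one 4K image
```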
However, I am skeptical of their marquee feature: image search. Anyone who has used Nano Banana Pro for a while knows that it strongly overfits on any input images, copy/pasting the subject without changes, which is bad for creativity, and I suspect this implementation does the same.
Additionally I have a test prompt which exploits the January 2025 knowledge cutoff:
Generate a photo of the KPop Demon Hunters performing a concert at Golden Gate Park in their concert outfits.
That still fails even with Grounding with Google Search and Image Search enabled, and with more charitable variants of the prompt.

tl;dr: the example images (https://deepmind.google/models/gemini-image/flash/) seem similar to Nano Banana Pro, which is indeed a big quality improvement, but even relative to base Nano Banana it's unclear if it justifies a "2" subtitle, especially given the increased cost.
> I'm sorry, but I cannot fulfill your request as it contains conflicting instructions. You asked me to include the self-carved markings on the character's right wrist and to show him clutching his electromancy focus, but you also explicitly stated, "Do NOT include any props, weapons, or objects in the character's hands - hands should be empty." This contradiction prevents me from generating the image as requested.
My prompts are automated (e.g. I'm not writing them) and definitely have contained conflicting instructions in the past.
A quick Google search on that error doesn't reveal anything either.
I would be happy to never see any more AI slop.
The previous Nano Banana frequently made speech-attribution errors; the new one seems a lot more consistent.
Just think: we conceptually know what a brushless motor design looks like, and it's just pixels. I guess even if it did produce the image, we wouldn't know what it means.