by ceroxylon
22 subcomments
- Google has been stomping around like Godzilla this week, and this is the first time I decided to link my card to their AI studio.
I had seen people saying that they gave up and went to another platform because it was "impossible to pay". I thought this was strange, but after trying to get a working API key for the past half hour, I see what they mean.
Everything is set up, I see a message that says "You're using Paid API key [NanoBanano] as part of [NanoBanano]. All requests sent in this session will be charged." Go to prompt, and I get a "permission denied" error.
There is no point in having impressive models if you make it a chore for me to -give you my money-
by vunderba
10 subcomments
- Alright, results are in! I've re-run all my editing-based, adherence-related prompts through Nano Banana Pro. NB Pro managed to successfully pass SHRDLU, the M&M Van Halen test (as verified independently by Simon), and the Scorpio street test, all of which the original NB failed.
Model results
1. Nano Banana Pro: 10 / 12
2. Seedream4: 9 / 12
3. Nano Banana: 7 / 12
4. Qwen Image Edit: 6 / 12
https://genai-showdown.specr.net/image-editing
If you just want to see how NB and NB Pro compare against each other:
https://genai-showdown.specr.net/image-editing?models=nb,nbp
by minimaxir
11 subcomments
- I...worked on the detailed Nano Banana prompt engineering analysis for months (https://news.ycombinator.com/item?id=45917875)...and...Google just...Google released a new version.
Nano Banana Pro should work with my gemimg package (https://github.com/minimaxir/gemimg) without pushing a new version by passing:
g = GemImg(model="gemini-3-pro-image-preview")
I'll add the new output resolutions and other features ASAP. However, looking at the pricing (https://ai.google.dev/gemini-api/docs/pricing#standard_1), I'm definitely not changing the default model to Pro, as $0.13 per 1k/2k output will make it a tougher sell.
EDIT: Something interesting in the docs: https://ai.google.dev/gemini-api/docs/image-generation#think...
> The model generates up to two interim images to test composition and logic. The last image within Thinking is also the final rendered image.
Maybe that's partially why the cost is higher: it's hard to tell if intermediate images are billed in addition to the final output. However, this could cause an issue with the base gemimg and have it return an intermediate image instead of the final image, depending on how the output is constructed, so I will need to double-check.
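Since the docs say the last image within Thinking is also the final rendered image, one defensive pattern for gemimg-style wrappers is to pick the last inline image part rather than the first. A minimal sketch, with illustrative dicts standing in for the real API response objects (not the actual google-genai types):

```python
def last_inline_image(parts):
    """Return the bytes of the last inline image among response parts.

    Thinking mode may emit up to two interim images before the final
    render, so we scan the parts in reverse generation order.
    """
    for part in reversed(parts):
        data = part.get("inline_data")
        if data and data.get("mime_type", "").startswith("image/"):
            return data["data"]
    return None  # no image part at all

# Stand-in for a real response's candidate parts, in generation order:
parts = [
    {"text": "planning the composition..."},
    {"inline_data": {"mime_type": "image/png", "data": b"interim-bytes"}},
    {"inline_data": {"mime_type": "image/png", "data": b"final-bytes"}},
]
print(last_inline_image(parts))  # b'final-bytes'
```

Whether interim images actually show up as separate parts in the API response is an assumption here; it depends on how Google structures the output, which is exactly what needs double-checking.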
- This thing's ability to produce entire infographics from a short prompt is really impressive, especially since it can run extra Google searches first.
I tried this prompt:
Infographic explaining how the Datasette open source project works
Here's the result: https://simonwillison.net/2025/Nov/20/nano-banana-pro/#creat...
- Something I find weird about AI image generation models is that even though they no longer produce weird "artifacts" that give away the fact that an image was AI generated, you can still recognize that it's AI due to stylistic choices.
Not all the examples they gave were like this. The example they gave of the word "Typography" would have fooled me as human-made. The infographics stood out, though: I would have immediately noticed that the String of Turtles infographic was AI generated because of the stylistic choices. Same for the guide on how to make chai. I would be "suspicious" of the weather forecast example but wouldn't immediately flag it as AI generated.
On a similar note, I used to be able to tell that something was AI generated right off the bat by noticing that it had a "DeviantArt" quality to it. My immediate guess is that certain sources of training data are over-represented.
by theoldgreybeard
22 subcomments
- The interesting tidbit here is SynthID. While a good first step, it doesn't solve the problem of AI generated content NOT having any kind of watermark. So we can prove that something WITH the ID is AI generated but we can't prove that something without one ISN'T AI generated.
Like it would be nice if all photo and video generated by the big players would have some kind of standardized identifier on them - but now you're left with the bajillion other "grey market" models that won't give a damn about that.
by meetpateltech
0 subcomment
- Developer Blog: https://blog.google/technology/developers/gemini-3-pro-image...
DeepMind Page: https://deepmind.google/models/gemini-image/pro/
Model Card: https://storage.googleapis.com/deepmind-media/Model-Cards/Ge...
SynthID in Gemini: https://blog.google/technology/ai/ai-image-verification-gemi...
by dangoodmanUT
6 subcomments
- I've had Nano Banana Pro for a few weeks now, and it's the most impressive AI model I've ever seen.
The inline verification of images following the prompt is awesome, and you can do some _amazing_ stuff with it.
It's probably not as fun anymore, though (in the early access program, it didn't have censoring!)
- It's crazy how good these models are at text now. Remember when text was literally impossible? Now the models can diegetically render any text. It's so good now that it seems like a weird blip that it _wasn't_ possible before.
Not to mention all the other stuff.
by mortenjorck
3 subcomments
- This is the first image model I’ve used that passed my piano test. It actually generated an image of a keyboard with the proper pattern of black keys repeated per octave – every other model I’ve tried this with since the first Dall-E has struggled to render more than a single octave, usually clumping groups of two black keys or grouping them four at a time. Very impressive grasp of recursive patterns.
by indigodaddy
5 subcomments
- I don't understand the excitement around generating and/or watching AI-produced videos. To me it's probably the single most uninteresting and boring thing related to AI that I can think of. What is the appeal?
- SynthID seems interesting, but in classic Google fashion, I haven't a clue how to use it, and the only button that exists is to join a waitlist. Apparently it's been out since 2023? Also, does SynthID work only within the Gemini ecosystem? If so, is this the beginning of a slew of these products with no one standard way? e.g., "Have you run that image through tool1, tool2, tool3, and tool4 before deciding this image is legit?"
edit: apparently people have been able to remove these watermarks with a high success rate, so this already feels like a DOA product
by TheAceOfHearts
2 subcomments
- You can try it out for free on LMArena [0]: New Chat -> Battle dropdown -> Direct Chat -> Click on Generate Image in the chat box -> Click dropdown from hunyuan-image-3.0 -> gemini-3-pro-image-preview (nano-banana-pro).
I've only managed to get a few prompts to go through; if it takes longer than 30 seconds, it seems to just time out. Image quality seems to vary wildly: the first image I tried looked really good, but then I tried to refresh a few times and it kept getting worse.
[0] lmarena.ai/
by fouronnes3
5 subcomments
- I guess the true endgame of AI products is naming them. We still have quite a way to go.
by evrenesat
8 subcomments
- I've tried to repaint the exterior of my house. More than 20 times with very detailed prompts. I even tried to optimize it with Claude. No matter what, every time it added one, two or three extra windows to the same wall.
by throwacct
13 subcomments
- Google needs to pace themselves. AI Studio, Antigravity, Banana, Banana Pro, Grape Ultra, Gemini 3, etc. This information overload doesn't do them any good whatsoever.
- Does anyone know if this is predicting the entire image at once, or if it's breaking it into constituent steps i.e. "draw text in this font at this location" and then composing it from those "tools"? It would be really interesting if they've solved the garbled text problem within the constraint of predicting the entire image at once.
by scottlamb
4 subcomments
- The rollout doesn't seem to have reached my userid yet. How successful are people at getting these things to actually produce useful images? I was trying recently with the (non-Pro) Nano Banana to see what the fuss was about. As a test case, I tried to get it to make a diagram of a zipper merge (in driving), using numbered arrows to indicate what the first, second, third, etc. cars should do.
I had trouble reliably getting it to...
* produce just two lanes of traffic
* have all the cars facing the same way—sometimes even within one lane they'd be facing in opposite directions.
* contain the construction within the blocked-off area. I think similarly it wouldn't understand which side was supposed to be blocked off. It'd also put the lane closure sign in lanes that were supposed to be open.
* have the cars be in proportion to the lane and road instead of two side-by-side within a lane.
* have the arrows go in the correct direction instead of veering into the shoulder or U-turning back into oncoming traffic
* use each number once, much less on the correct car
This is consistent with my understanding of how LLMs work, but I don't understand how you can "visualize real-time information like weather or sports" accurately with these failings.
Below is one of the prompts I tried to go from scratch to an image:
> You are an illustrator for a drivers' education handbook. You are an expert on US road signage and traffic laws. We need to prepare a diagram of a "zipper merge". It should clearly show what drivers are expected to do, without distracting elements.
> First, draw two lanes representing a single direction of travel from the bottom to the top of the image (not an entire two-way road), with a dotted white line dividing them. Make sure there's enough space for the several car-lengths approaching a construction site. Include only the illustration; no title or legend.
> Add the construction in the right lane only near the top (far side). It should have the correct signage for lane closure and merging to the left as drivers approach a demolished section. The left lane should be clear. The sign should be in the closed lane or right shoulder.
> Add cars in the unclosed sections of the road. Each car should be almost as wide as its lane.
> Add numbered arrows #1–#5 indicating the next cars to pass to the left of the "lane closed" sign. They should be in the direction the cars will move: from the bottom of the illustration to the top. One car should proceed straight in the left lane, then one should merge from the right to the left (indicate this with a curved arrow), another should proceed straight in the left, another should merge, and so on.
I did have a bit better luck starting from a simple image and adding an element to it with each prompt. But on the other hand, when I did that it wouldn't do as well at keeping space for things. And sometimes it just didn't make any changes to the image at all. A lot of dead ends.
I also tried sketching myself and having it change the illustration style. But it didn't do it completely. It turned some of my boxes into cars, but not necessarily all of them. It drew a "proper" lane divider over my thin dotted line but still kept the original line, etc.
by smusamashah
0 subcomment
- This is what the SynthID signature looks like on Nano Banana images https://www.reddit.com/r/nanobanana/comments/1o1tvbm/nano_ba...
And if it can be seen like that, it should be removable too. There are more examples in that thread.
by mark_l_watson
0 subcomment
- I used the new Nano Banana Pro just now, indirectly. I was brainstorming with Gemini 3 Thinking mode (now the default best thinking option on my iPadOS Gemini app) over a system design for an open source project that I hope to put a lot of effort into next year and then I asked for a detailed system level diagram.
The results were very good because the diagram reflected what I had specified during chat.
I probably sounded like an idiot when Gemini 3 was released: I have been a paid ‘AI practitioner’ since 1982, lived through multiple AI winters, but I wrote this week that Gemini 3 meets my personal expectations for AGI for the non-physical (digital) world.
by AmbroseBierce
6 subcomments
- 2D animators can still feel safe about their jobs. I asked it to generate a sprite sheet animation by giving it the final frame of the animation (as a PNG file) and describing in detail what I wanted in the sprite sheet. It gave me mediocre results: I asked for 8 frames, and it just repeated a bunch of poses to reach that number, instead of doing what a human would have done with the same request, meaning the in-betweens that make the animation smoother (AKA interpolation).
- Just last night I was using Gemini "Fast" to test its output for a unique image we would have used in some consumer research, if there had been a good stock image back in the day. I have been testing this prompt since the early days of AI images. The improvement in quality has been pretty remarkable for the same prompt. Composition across this time has been consistent. What I initially thought was "good enough" now is... fantastic. Just so many little details got more life-like with each new generation.
Funnily enough, our images must be in a 3:2 aspect ratio. I kept asking GFast to change its square output to 3:2. It kept saying it would, but each image was square or nearly square. GFast in the end was very apologetic, and said it would alert about this issue. Today I read that GPro does aspect ratios. Tried the same prompt again, burning up some "Thinking" credits, and got another fantastically life-like image in 3:2.
We have a new project coming up. We have relied entirely on stock or, in some cases, custom-shot images to date. Now, apart from the time needed to get the prompts right whilst meeting with the client, I cannot see how stock or custom images can compete. I mean, the GPro images -- again, which is very specific to an unusual prompt -- are just "Wow". Want to emphasize again -- we are looking for specific details that many would not, so the thoughts above are specific to this. Still, while many faults can be found with AI, Nano Banana has certainly proven itself to me.
edit: I was thinking about this, and am not sure I even saw Pro3 as my image option last night. Today it was clearly there.
by ZeroCool2u
3 subcomments
- I tried the Studio Ghibli prompt on a photo of me and my wife in Japan, and it was... not good. It looked more like a hand-drawn sketch made with colored pencils, but none of the colors were correct. Everything was a weird shade of yellow/brown.
This has been an oddly difficult benchmark for Gemini's NB models. Google's image models have always been pretty bad at the Studio Ghibli prompt, but I'm shocked at how poorly it still performs at this task.
- This is super awesome, but how in the world did they come up with a name "Nano Banana Pro"? It sounds like an April Fools joke.
- I wonder how hard it is to remove that SynthID watermark...
Looks like: "When tested on images marked with Google’s SynthID, the technique used in the example images above, Kassis says that UnMarker successfully removed 79 percent of watermarks." From https://spectrum.ieee.org/ai-watermark-remover
by CSMastermind
1 subcomments
- There's some really impressive things about this (the speed, the lack of typical AI image gen artifacts) but it also seems less creative than other models I've tried?
"mountain dew themed pokemon" is the first search prompt I always try with new image models and Nano Banna Pro just gave me a green pikachu.
Other models do a much better job of creating something new.
by al_be_back
0 subcomment
- A houseplant with tiny turtles for leaves… very informative if under the influence of some substances.
It’s not a Hello World equivalent.
So much around generative AI seems to be about "look how unrealistic you can be for not-cheap! AI - cocaine for your machine!!"
No wonder there's very little uptake by businesses (MIT State of AI 2025, etc.)
- I'll be running it through my GenAI Comparison benchmark shortly - but so far it seems to be failing on the same tests that the original Nano Banana struggled with (such as SHRDLU).
https://genai-showdown.specr.net/image-editing
- The funny part is that Google puts a watermark on the generated graphics, because they are oh so not evil and socially responsible.
Unless you pay Google more, which is mentioned at the very bottom of this infomercial.
"Recognizing the need for a clean visual canvas for professional work, we will remove the visible watermark from images generated by Google AI Ultra subscribers and within the Google AI Studio developer tool."
BTW: anyone with the skills found in 1 min on the Internet can remove all of those ids, etc. (yes, as you might guess, the website is called remove synth id dot com...)
- Gemini is all over the place for me. Nano Banana produces some great images. Today I asked Gemini to design a graphic based on the first sheet in a Google sheet. It produced a graphic with a summary of the data and a picture of a bed sheet. Nailed it.
by anentropic
2 subcomments
- Is there an "in joke" to this name that I am too old to get? Or it's just a whimsically random name?
- First model I've seen that was consistently compositional, easily handling requests like
“Generate an image of an african elephant painted in the New England flag, doing a backflip in front of the russian federal assembly.”
OpenAI made the biggest step change toward compositionality in image generation when they started directly generating image tokens for decoders from foundation LLMs, and it worked very well (OpenAI's images were better in this regard than Nano Banana 1, but struggled with some OOD images, like elephants doing backflips), but Banana 2 nails this stuff in a way I haven't seen anywhere else.
If video follows the same trends as images in terms of prompt adherence, that will be very valuable... and interesting.
by H1Supreme
1 subcomments
- This is really impressive. As a former designer, I'm equally excited that people will be able to generate images like this with a prompt, and sad that there will be much less incentive for people to explore design / "photoshopping" as a craft or a career.
At the end of the day, a tool is a tool, and the computer had the same effect on the creative industry when people started using them in place of illustrating by hand, typesetting by hand, etc. I don't want my personal bias to get in the way too much, but every nail that AI hammers into the creative industry's coffin is hard to witness.
by bespokedevelopr
0 subcomment
- It’s interesting: I’m trying to use it to create a themed collage by providing a few images, and it does that wonderfully, but in the process it also hallucinates on the images I provide, so I end up with weird distorted faces. Other tools can do this without issue, but something about faces in images, this model just has to modify them every time. Ask it to remove background objects and the faces get distorted as well.
Using it for images without people, it’s pretty good, although I haven’t done much, and it isn’t doing anything 2.5-flash wasn’t already doing in the same number of requests.
- I feel like I am going crazy or missed something simple, but when I use the Gemini app and ask it to edit a photo that I upload, 2.5 Flash works really well, but 2.5 Pro or 3.0 Pro do a very poor job. I uploaded an image of myself and asked it to make me bald; Flash did a great job of just changing me in the photo, but 3.0 Pro took me out of the photo completely and just created a headshot of a bald man that only sort of resembled me. Am I missing something, or does paying for the Pro version not give you anything over the 2.5 Flash model?
by into_the_void
1 subcomments
- Is SynthID actually running an AI classifier to decide whether an image is model-generated, or is it only checking for an embedded watermark? If it’s a classifier, the accuracy is questionable — generic “AI detection” tools tend to produce high false-positive rates. Also unclear whether it’s doing semantic anomaly checks (extra fingers, physics errors) or low-level pixel-signature analysis.
by chaosprint
0 subcomment
- In my limited testing, at least in terms of maintaining consistency between input and output for Asian faces, it has even regressed.
Actually, Gemini 3 is about the same, and doesn't feel as good as Claude 4.5. I have a feeling it's been fine-tuned for a cool front-end marketing effect.
Furthermore, I really don't understand why AI Studio, which now requires me to pay for its own API, still adds a watermark.
by Shalomboy
1 subcomments
- The SynthID check for fishy photos is a step in the right direction, but without tighter integration into everyday tooling it's not going to move the needle much. Like when I hold the power button on my Pixel 9: it would be great if it could identify synthetic images on the screen before I think to ask about it. For what it's worth, it would be great if the power button shortcut on Pixel did a lot more things.
- It's great to know that Nano Banana Pro gets multiple items of my impossible AIGC benchmark done: https://github.com/tianshuo/Impossible-AIGC-Benchmark
by sarbajitsaha
1 subcomments
- Slightly off topic, but how are people creating long videos, like the 30-second videos I often see on Instagram? If I try to use Veo to make split videos, it simply cannot maintain the style, or weird quirks get into the subsequent videos. Is there any video generation model currently better than Veo?
- With this model, I'm more worried about future online fraud. Will there still be authenticity?
by user34283
1 subcomments
- The visual quality of photorealistic images generated in the Gemini app seems terrible.
Like really ugly. The 1K output resolution isn't great, but on top of that it looks like a heavily compressed JPEG even at 100% viewing size.
Does AI Studio have the same issue? There at least I can see 2K and 4K output options.
by visioninmyblood
2 subcomments
- Wow! I was able to combine Nano Banana Pro and Veo 3.1 video generation in a single chat and it produced great results. https://chat.vlm.run/c/38b99710-560c-4967-839b-4578a4146956. Really cool model
by 1970-01-01
0 subcomment
- The naming is somehow getting worse. I swear we will soon see models that are named just with emojis.
by embedding-shape
2 subcomments
- I tried the same prompt as one of the examples (https://i.imgur.com/iQTPJzz.png), in the two ways they say you can run it: via Google Gemini and via Google AI Studio (I suppose they're different somehow?). The prompt was "Create an infographic that shows hot to make elaichi chai". Google Gemini created an infographic (https://i.imgur.com/aXlRzTR.png), but it was all different from what the example showed. Google AI Studio instead created an interactive website, again with different directions: https://i.imgur.com/OjBKTkJ.png
There is not a single mention of accuracy, risks, or anything else in the blog post, just how awesome the thing is. It's clearly not meant to be reliable just yet, but they don't make this clear up front. Isn't this almost intentionally misleading people, something that should be illegal?
by eminence32
4 subcomments
- > Generate better visuals with more accurate, legible text directly in the image in multiple languages
Assuming that this new model works as advertised, it's interesting to me that it took this long to get an image generation model that can reliably generate text. Why is text generation in images so hard?
- I was just playing with the non-Pro version of this, and it seems to add both a Gemini and a Disney watermark. Presumably this was because I referenced Beauty and the Beast.
Does anyone know if this is a hallucination, or if they have some kind of deal with content owners to add branding?
by cyrusradfar
1 subcomments
- I really hope Google reads these HN posts. They've had some big "product" wins but the pricing, packaging, and user system is a severe blocker to growth. If developers can't or won't figure it out -- how the heck are consumers?
by big-chungus4
0 subcomment
- Guys, Nano Banana Pro is a new image model by Google which generated images for me
- My experience with Nano Banana is a constant struggle to get consistent images when dealing with multiple objects in an image, I mean creating a consistent sequence, etc.
We spent a lot of money trying but eventually gave up. If it is easier in Pro, then it probably stands a chance.
- You know what's annoying? With each iteration, the quality of the first original image gets worse and worse, until it loses resolution, details, etc.
by visioninmyblood
2 subcomments
- If Nano-Banana-pro with Veo 3.1 existed during my PhD, I would’ve finished a 6-year dissertation in a single year — it’s generating ideas today that used to take me 18 months just to convince people were possible.
- Can anyone please explain the invisible watermarking mentioned in the promo?
- Generated images still contain JPEG artifacts all over them.
We are not doomed yet: you can pretty much reliably spot a RAW image vs. an AI-generated image just by zooming in.
- Will be interesting to see how this model performs in real-world creative tasks. https://creativearena.ai/
- What can nano-banana do that ChatGPT-made images can't? Or is it only better for image editing, from what I can gather from these comments so far? I haven't used it, so I'm genuinely curious.
- Interesting they didn’t post any benchmark results - lmarena/artificial analysis etc. I would’ve thought they’d be testing it behind the scenes the same way they did with Gemini 3.
by jasonjmcghee
7 subcomments
- Maybe I'm an obscure case, but I'm just not sure what I'd use an image generation model for.
For people that use them (regularly or not), what do you use them for?
- Google is able to churn out SOTA models across the board, but still can't figure out the basic user journey. No joke!
- It's a funny juxtaposition to slap the "Pro" label on it which makes it sound more enterprisey but leave the name as Nano Banana.
- Time to expand my creation catalog. Let's see what we can get out of this Pro version. It seems this week is for big AI announcements from Google.
by Aman_Kalwar
0 subcomment
- Really interesting. Curious what the main design motivation behind this project was and what gaps it fills compared to existing tools?
- I wouldn't trust any of the info in those images in the first carousel if I found them in the wild. It looks like AI image slop and I assume anyone who thinks those look good enough to share did not fact check any of the info and just prompted "make an image with a recipe for X"
by atom-morgan
0 subcomment
- Anyone know how to use this with Google Slides? I don't see it anywhere in app.
- My first thought was of an SBC; a media AI cloud product was not high up on my guess list.
by mattmaroon
0 subcomment
- Nano Banana has been the only model I’ve really loved. As a small business that makes products, it’s been a game changer on the marketing side. Now, when I’ve got something new I need to advertise in a hurry, I take a crappy pic and fix it in Nano Banana. Don’t have a perfect model ready yet? That’s OK, I can just alter it to look exactly like it will.
What used to cost money and involve wait time is now free and instant.
- Oh what a day. What a lovely day.
https://www.youtube.com/watch?v=5mZ0_jor2_k
Honestly I think this is exactly how we're all feeling right now. Racing towards an unknown horizon in a nitrous powered dragster surrounded by fire tornadoes.
by shevy-java
0 subcomment
- Not gonna lie - this is pretty cool.
But ... it comes from Google. My goal is to eventually degoogle completely. I am not going to add any more dependency - I am way too annoyed at having to use the search engine (getting constantly worse though), google chrome (long story ...) and youtube.
I'll eventually find solutions to these.
by mogomogo19292
0 subcomment
- Still seems to mess up speech bubbles in comic strips unfortunately
- Really missed an opportunity to name it micro banana (or milli banana). Personally, I can't wait for mega banana next year.
by myth_drannon
2 subcomments
- Adobe's stock is down 50% from last year's peak. It's humbling and scary that entire industries with millions of jobs evaporate in a matter of few years.
by semiinfinitely
0 subcomment
- "Talk to your Google One Plan Manager"
wtf
- I am extremely impressed by google this week.
I don't want to be annoying, it's just a small piece of feedback, but seriously, why is it so hard for Google to have a simple onboarding experience for paying customers?
In the past I spoke about how my whole startup got taken offline for days because I "upgraded" to paying, and that was a decade ago. I mean, it can't be hard; other companies don't have these issues!
I'm sure it will be fixed in time, it's just a bit bizarre. Maybe it's just not enough time spent on updating legacy systems between departments or something.
by Spacemolte
0 subcomment
- "Sorry, I'm still learning to create images for you, so I can't do that yet. I can try to find one on the web though."
by willsmith72
4 subcomments
- > Starting to roll out in the Gemini API and Google AI Studio
> Rolling out globally in the Gemini app
Wanna be any more vague? Is it out or not? Where? When?
- does it handle transparency yet?
- I’ve been struggling with infographics. That’s my main use case but every tool seems to bungle the text.
- One of the things I've always been curious about is how effective diffusion models can be for web and app design. They're generally trained on more organic photos, but post-training on SDXL and Flux has given me good results here in the past (with the exception of text).
It's been interesting seeing the results of Nano Banana Pro in this domain. Here are a few examples:
Prompt: "A travel planner for an elegant Swiss website for luxury hiking tours. An interactive map with trail difficulty and booking management. Should have a theme that is alpine green, granite grey, glacier white"
Flux output: https://fal.media/files/rabbit/uPiqDsARrFhUJV01XADLw_11cb4d2...
NBP output: https://v3b.fal.media/files/b/panda/h9auGbrvUkW4Zpav1CnBy.pn...
---
Prompt: "a landing page for a saas crypto website, purple gradient dark theme. Include multiple sections, including one for coin prices, and some graphs of value over time for coins, plus a footer"
Flux output: https://fal.media/files/elephant/zSirai8mvJxTM7uNfU8CJ_109b0...
NBP output: https://v3b.fal.media/files/b/rabbit/1f3jHbxo4BwU6nL1-w6RI.p...
---
Prompt: "product launch website for a development tool, dark background with aqua blue and neon gold highlights, gradients"
Flux output: https://fal.media/files/zebra/aXg29QaVRbXe391pPBmLQ_4bfa61cc...
NBP output: https://v3b.fal.media/files/b/lion/Rj48BxO2Hg2IoxRrnSs0r.png
---
Note that this is with a LoRA I built for Flux specifically for website generation. Overall, NBP seems to have less creative / inspired outputs, but the text is FAR better than the fever dream Flux is producing. I'm really excited to see how this changes design. At the very least, it proved it can get close to production quality for output; now it's just about tuning it.
by standardly
0 subcomment
- Anyone else think "Nano Banana" is an awful name? For some reason it really annoys me. It looks incredibly fancy, though.
by simianparrot
1 subcomments
- What is up with these product names!? Antigravity? Nano Banana?
Not just are they making slop machines, they seem to be run by them.
I am too old for this shit.
by isoprophlex
0 subcomment
- If only there was a straightforward way to pay google to use this, with a not entirely insane UX...
- Yuck. The last thing the world needs is another slop generator
by Andrew-Tate
0 subcomment
- [dead]
by Joshua-Peter
1 subcomments
- [flagged]
- Cool, but it's still unusable for me. Somehow all my prompts are violating the rules, huh?
by egypturnash
8 subcomments
- Everyone who worked on this is a traitor to the human race. Why do we need to make it impossible to make a living as an artist? Who thinks an endless tsunami of garbage “content” churned out by machines dropping the bottom out of all artistic disciplines is a good idea?
- Can Google Gemini 3 check Google Flights for live ticket prices yet?
(The Gemini 3 post has a million comments too many to ask this now)
- Nano Banana Pro sounds like classic Google branding: quirky name, serious tech underneath. I’m curious whether the “Pro” here is about actual professional‑grade features or just marketing polish. Either way, it’s another reminder that naming can shape expectations as much as specs.