Anybody using AI tools should be extremely cautious about what those tools produce.
There are a lot of smart and talented people working hard to embed Hasbara into LLMs.
For example, they will occasionally replace "colour" with "color". Why? Because both occur in the training data in the "same role", but "color" is, apparently, more common [1]. You can also trick them into replacing "sardines" with "anchovies" (on pizza) and "head of lettuce" with "cabbage" in river-crossing puzzles, where the rowboat context pulls toward the classic wolf/goat/cabbage phrasing.
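A toy sketch of that frequency effect (plain Python; the mini-corpus and the prefer() helper are invented for illustration, and real models absorb the same bias implicitly through token probabilities rather than an explicit rule):

    from collections import Counter

    # Stand-in "training data": the same sentence frame, with the American
    # spelling about 9x more frequent, roughly the ratio in [1].
    corpus = ["my favorite color is blue"] * 9 + ["my favourite colour is blue"]

    counts = Counter(word for line in corpus for word in line.split())

    def prefer(a, b):
        # Crude frequency tiebreaker: of two words that can fill the same
        # slot, emit the one seen more often. This is the statistical pull
        # that "corrects" colour -> color.
        return a if counts[a] >= counts[b] else b

    print(prefer("colour", "color"))  # prints "color"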
They are lossy, text-compressing parrots, and we are all suffering from a massive, madness-of-crowds-scale Eliza Effect.
[1] Yep. https://books.google.com/ngrams/graph?content=color%2C+colou...