What surprised me is that I hadn't made even one sale. Somehow someone had notified Nintendo AND my shop had been taken down, for selling merch that didn't even exist for that market yet. If I remember correctly, it didn't even have any imagery or anything trademarkable on it, even if it was clearly meant for Pokémon Go fans.
I'm not bitter; I just found it interesting how quick and ruthless they were. Like, bros, I didn't even get a chance to make a sale. (And no, I don't think I infringed anything.)
That said, copyright terms like life of the author plus 95 years are absolutely absurd. The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property. The reasoning is that if you don't allow people to exclude third-party copying, then the primary party presumably won't receive compensation for their creation, and so will never create.
Even granting that assumption, protection should be afforded for no longer than the time necessary to ensure that creators create.
There are approximately zero people who decide they'll create something if they're protected for 95 years after their death but won't if it's 94 years. I wouldn't be surprised if it was the same for 1 year past death.
For that matter, this argument extends to other criminal penalties, but that's a whole other subject.
I really, really hope the multimedia-megacorps get together and class-action ChatGPT and every other closed, for-profit LLM corporation into oblivion.
There should not be a two-tier legal system. If it's illegal for me, it's illegal for Sam Altman.
Get to it.
I feel like the less advanced generations of models, maybe precisely because of their size limitations, were better at coming up with something that at least feels new.
In the end, other than for copyright-washing, why wouldn't I just use the original movie still/photo in the first place?
One thing I would say, it's interesting to consider what would make this not so obviously bad.
Like, we could ask the AI to assess the physical attributes of the characters it generated, then ask it to permute some of those attributes. Generate some random tweaks: okay, but brawny, short, and of a different descent. Do the same with some clothing colors. Change the game. Hit the "random character" button on the physical attributes a couple of times.
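That attribute-shuffling idea can be sketched concretely. A minimal sketch, assuming nothing about any real tool: the attribute pools, values, and function names here are purely illustrative of what a "random character" button plus a permute step might do:

```python
import random

# Hypothetical attribute pools; names and values are illustrative only.
ATTRIBUTES = {
    "build": ["slight", "average", "brawny"],
    "height": ["short", "average", "tall"],
    "descent": ["East Asian", "West African", "Scandinavian", "Andean"],
    "jacket_color": ["olive", "burgundy", "navy", "mustard"],
}

def random_character(rng=random):
    """The 'random character' button: pick one value per attribute."""
    return {attr: rng.choice(values) for attr, values in ATTRIBUTES.items()}

def permute(character, n_tweaks=2, rng=random):
    """Tweak exactly n_tweaks attributes of an existing character,
    keeping the rest as-is."""
    tweaked = dict(character)
    for attr in rng.sample(list(ATTRIBUTES), k=n_tweaks):
        # Exclude the current value so each tweak is a real change.
        options = [v for v in ATTRIBUTES[attr] if v != character[attr]]
        tweaked[attr] = rng.choice(options)
    return tweaked
```

The resulting attribute dict would then be folded back into the image prompt, nudging the generator away from its default rendition.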
There was an equally shattering recent incident for me that involved less IP theft (and as someone who thinks IP itself is ripping humanity off and should be vastly scoped down, it's important to me not to rest my arguments on IP violations). I'm having trouble finding it, I don't remember the right keywords, but it was an article about how AI has a "default guy" type, a super generic personage, that it uses everywhere, repeatedly. It was so distasteful.
The nature of 'AI as compression', of giving you the most median answer, is horrific. Maybe, maybe, maybe we can escape some of this trap by iterating to different permutations, by injecting deliberate exploration of the state space. But I still fear AI, and worry horribly when anyone relies on it for decision making, as it is anti-intelligent, uncreative in the extreme, requiring human ingenuity to budge it off the rock of oppressive hypernormality it regurgitates.
I found Apple's tool frustrating. I have a buzzed haircut, but no matter what I did, Apple was unable to give my avatar that hairstyle. It wants so badly for my avatar to have some longer hair to flourish with, and refuses to do anything else.
> Hayao Miyazaki’s Japanese animation company, Studio Ghibli, produces beautiful and famously labor intensive movies, with one 4 second sequence purportedly taking over a year to make.
It makes me wonder, though, whether it's more valuable to spend a year on a scene that most people won't pay much attention to (artists will understand and appreciate it, maybe pause, rewind, replay, and examine the details; the casual viewer just enjoys it at a glance), or to use tools in addition to your own skills to knock it out of the park in a month and make more great things.
A bit like how digital art has clear advantages over paper, while many still revere traditional art despite it taking longer and being harder. The same way someone who uses AI-assisted programming tools can improve their productivity by getting rid of some of the boilerplate or automating some refactoring.
AI will definitely cheapen the art of doing things the old way, but that’s the reality of it, no matter how much the artists dislike it. Some will probably adapt and employ new workflows, others stick to tradition.
The author is so generous... but Sam Altman literally has a Ghibli-fied social media profile, and in response to all this said OpenAI chooses its demos very carefully. His primary concern is that Ghibli-fying prompts are over-consuming GPU resources, degrading the service by crowding out other ChatGPT tasks.
It's very clear that generative AI has abandoned the idea of the creative; image production that just replicates the training data only serves to further flatten our idea of what the world should look like.
The current generation of AI models can't think of anything truly new. Everything is simply a blend of prior work. I'm not saying this has no economic value, but it means these models are closer to lossy compression algorithms than to AGI.
The following quote by Sam Altman from about 5 years ago is interesting.
"We have made a soft promise to investors that once we build this sort-of generally intelligent system, basically we will ask it to figure out a way to generate an investment return."
That's a statement I wouldn't even dream of making today.
Though I'm also generally opposed to the notion of intellectual property altogether, on the basis that it doesn't seem to serve its intended purpose, and whatever good could be salvaged from its various systems can already be well represented by other existing legal concepts, e.g., deceptive behaviors being prosecuted as forms of fraud.
https://chatgpt.com/share/67efebf4-3b14-8011-8c11-8f806c7ff6...
> Does the growth of AI have to bring with it the tacit or even explicit encouragement of intellectual theft?
And like, yes, 100%. What else is AI but a tool for taking other people's work and reassembling it into a product for you without needing to pay someone? Want an awesome Studio Ghibli'd version of yourself? There are thousands of artists online you could commission for a few bucks who'd probably make something actually interesting. But no, we go to AI because we want to avoid paying a human.
The real issue here is that there's a whole host of implied context in human language. On the one hand, we expect the machine not to spit out copyrighted or trademarked material; on the other, a whole lot of cultural and implied context gets baked into these things during training.
Unfortunately, it's just the opposite. It seems most people have fully assimilated the idea that information itself must be entirely subsumed into an oppressive, proprietary, commercial apparatus. That Disney Corp can prevent you from viewing some collection of pixels, because THEY own it, and they know better than you do about the culture and communication that you are and are not allowed to experience.
It's just baffling. If they could, Disney would scan your brain to charge you a nickel every time you thought of Mickey Mouse.
Or, (2) LLMs are creative and do have agency, and feeding them bland prompts doesn't get their juices flowing. Copyright isn't a concern, the model just regurgitated a cheap likeness of Indiana Jones as Harrison Ford the world has seen ad nauseam. You'd probably do the same thing if someone prompted you the same way, you lazy energy conserving organism you.
In any case, perhaps the idea that "cheap prompts yield cheap outputs" holds true. You're asking the model to respond to the entirely uninspired phrase: "an image of an archeologist adventurer who wears a hat and uses a bullwhip". It's not surprising that the model outputs a generic pop-culture-shaped image that looks uncannily like the most iconic and popular rendition of the idea: Harrison Ford.
If you look at the type of prompts our new generation of prompt artists are using over in communities like Midjourney, a cheap generic sentence doesn't cut it.
Now, what if I get the highest fidelity speakers and the highest fidelity microphone I can and play that song in my home. Then I use a deep learned denoiser to clean the signal and isolate the song’s true audio. Is this theft?
The answer does not matter. The genie is out of the bottle.
There’s no company like Napster to crucify anymore when high quality denoising models are already prior art and can be grown in a freaking Jupyter notebook.
I received so many copyright and DMCA takedowns for YouTube videos posted in the early 2010s, for no reason except some hit song blaring in the background. They had millions of views and NO ads. Now the ad-infested copies, with whatever tricks they use, can still be found, while my videos, which predate them all, had to be deleted. Google. You like their product? Don't like it too much; it may cease to exist, or maybe just for you, for arbitrary reasons, simultaneously removing your access to hundreds of websites via their monopoly on single sign-on.
Then there are the takedown notices for honest negative reviews on Google Maps, filed by notorious companies that have accumulated enough money via scams that they can now afford lawyers. These lawyers use their tools and power to manipulate a factual score into a "cleansed" one.
OpenAI seriously has not received any court orders from all the movie studios in the world? How is that even possible?
I previously posted in a comment that I have video evidence, with a friend as an eyewitness, of how OpenAI is stealing data. How? Possibly by abusing access granted by Microsoft.
Who is still defending OpenAI and why? There are so many highly educated and incredibly smart people here, this is one of the most glaring and obvious hardcore data/copyright violations, yet OpenAI roams free. It's also the de-facto most ClosedAI out there.
OpenAI is:
- Accessing the private IP and data of millions of organisations
- Silencing and killing whistleblowers, like Boeing
- Using $500B of taxpayer money to produce closed-source AI
- Run by a founder who has lost it and straight up wants to raise trillion(s)
For each of these claims there is material that can easily be linked to prove it, but some people like ChatGPT and confuse its usefulness with the misaligned, bad corporate behaviour of this multi-billion-dollar corporation.
[ Challenge Image: An aquarium full of baby octopodes, containing a red high-heeled slipper in the center and a silver whistle hanging from a fern on the right-hand side ]
Then the contestants have to come up (under pressure, of course) with a prompt that produces their own rendition of that image, and the game decides if their image contains enough of the elements of the original to score a point.
Indeed, this phenomenon among normal or true intelligences (us) is thought to be a good thing by copyright holders and is known as "brand recognition".
Intelligences -- the normal, biological kind -- are capable of copyright infringement. Why is it a surprise that artificial ones can help us do so as well?
This argument boils down to "oh no, a newly invented tool can be used for evil!". That's how new power works. If it couldn't be used for both good and evil, it's not really power, is it?
> It's a jeopardy machine. You give it the clue, and it gives you the obvious answer.
Incredibly lucid analogy.
I'm not sure why style was the hang-up here; isn't the issue clearly that it's AI-generated? I'm sure that two weeks ago, a human making the same picture would obviously have been worth crediting.
This image generation is a tool like any other tool. If the image generator generates an image of Mickey Mouse or if I draw Mickey Mouse by hand in photoshop, I can't use it commercially either way.
So what exactly is new or different here?
Mmm, kinda, but those image results only avoid showing 1,000 copies of the exact same image before anything else because they're tuned to avoid showing too many similar images. If you use a search without that similarity-avoidance baked in, you see it immediately. It's actually super annoying when what you're trying to find is in fact variations on the same image, because they'll go way out of their way to avoid showing them, though some engines have tools for that ("show me more examples of images almost exactly like this one" sorts of tools).
The data behind the image search, before it goes through a similarity classifier (or whatever) and has the too-close matches filtered out (or however exactly it works), probably looks a lot like "huh, every single adventurer with a hat looks exactly like Harrison Ford?"
There are similar diversity-increasers at work in search results. It's why you can search "reddit [search terms]" on DDG and exactly the first three results are from Reddit (without limiting the search to the site itself, just using it as a keyword), but then it switches to giving you other sites.
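For illustration, one form such a diversity-increaser could take is a greedy filter over embedding similarity. This is a sketch under stated assumptions, not how DDG or any real engine works: the `(item, embedding)` representation and the 0.9 threshold are both made up here.

```python
def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def diversify(results, max_similarity=0.9):
    """Greedy pass over relevance-ordered (item, embedding) pairs:
    drop any result too similar to one already kept."""
    kept = []
    for item, emb in results:
        if all(cosine(emb, kept_emb) < max_similarity for _, kept_emb in kept):
            kept.append((item, emb))
    return [item for item, _ in kept]
```

With a filter like this, a near-duplicate of an already-shown result never surfaces, which is exactly why "show me more images like this one" needs a separate tool that bypasses it.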
I don't see why an AI can't generate IP, even if the AI is being sold. What's not allowed is selling the generated IP.
Style is even more permissive: you're allowed to sell something in any style. AFAIK the only things that can be restricted are methods to achieve the style (via patents), similar brands in similar service categories (via trademarks), and depictions of objects (via copyrights).
Note that being "the original" gives a work an intrinsic popularity advantage, and being "the original creator" gives someone's new works an intrinsic advantage. I believe in attribution, which means that if someone recreates or closely derives another's art or style, they should point to the original.** With attribution, IP is much less important, because a recreation or spin-off must be significantly better to out-compete the original in popularity, and even then, its extra success usually spills onto the original, making it more popular than it would be without the recreation or spin-off anyway. Old books, movies, and video games like Harry Potter, Star Wars, and Sonic have many "recreations" which copy all but their IP, and fan recreations which copy even that; yet they're far more popular than all the recreations, and when Warner Bros, Disney, or SEGA release a new installment, the new installment is far more popular too, simply because it's from the original creator.
* IANAL, maybe there are some technicalities, but in practice this is true.
** Or others can do it. As long as it shows up alongside their work, so people don't see the recreation or close derivation without knowing about the original.
Harrison Ford's head is way too big for his body. Same with Alicia Vikander's and Daniel Craig's too. Daniel Craig is way too young too. Bruce Willis's just looks fake, and he's holding his lighter in the opposite hand from the famous photo.
So it's not reproducing any actual copyrighted images directly. It's more like an artist trying to paint from memory. Which seems like an important distinction.
For all of the examples, I knew what image to expect before seeing it. I think it's the user who is at fault for requesting a copyrighted image, not the LLM for generating it. The LLM is giving (nearly) exactly what the user expects.
LLMs are better at generating the boilerplate of today's programming languages than they will be with tomorrow's.
This is because not only will tomorrow's programming languages be newer, lacking a corpus to train the models on, but by the time a corpus is built, it will consist largely of LLM hallucinations that got checked into GitHub.
The internet that has been trawled to train the LLMs is already largely SEO spam; the internet of the future will be much more so. The loop will feed into itself and become ever worse in quality.
“An insult to life itself”: Hayao Miyazaki critiques an animation made by artificial intelligence
https://qz.com/859454/the-director-of-spirited-away-says-ani...
tl;dr: Try the Disney boss and see what happens!
This allows their pictures to look more realistic, but it also now shows very clearly how much they have (presumably always) trained on copyrighted pictures.
What if I want to prompt:
"An image of an archeologist adventurer who wears a hat and uses a bullwhip, make sure it is NOT Indiana Jones."
One way or another, you (and the model) do need to know who Indiana Jones is.
After that, the moral and legal choices of whether to generate the image, and what to do with it, are all yours.
And we might not agree on what that is, but you do get the choice
The photo was of poor quality, but one could certainly see all the features, so I figured, why not let ChatGPT play around with it? I got three different versions where it simply tried to upscale and "enhance" it. But no dice.
So I just wrote the prompt "render this photo as a hyper-realistic photo", and it really did change us, the people in the photo. It also took the liberty of removing some things and altering other background details.
It made me think - I wonder what all those types of photos will be like 20 years from now, after they've surely been fed through some AI models. Imagine being some historian 100 years from now, trying to wade through all the altered media.
The rules for Disney are not the same as the rules for most creators.
https://nwn.blogs.com/nwn/2024/09/ted-chiang-ai-new-yorker-c...
https://petapixel.com/2023/06/05/japan-declares-ai-training-...
The people leave, go to different studios, and make different art. This is not their only style, and Ghibli is not known to make many movies these days.
The only thing this is hurting, if anything, is Studio Ghibli, not the artists. Artists capable of drawing in this style can draw in any style.
Right now it's an unencumbered exploration with only a few requirements. The general public isn't too upset even if it blatantly feeds on copyrighted data [1] or attacks wikipedia with its AI crawlers [2].
The end state, once legislation has had a chance to catch its breath, looks more like Apple being forced to implement USB-C.
[1] https://arstechnica.com/tech-policy/2024/02/why-the-new-york...
[2] https://arstechnica.com/information-technology/2025/04/ai-bo...
Memes are pretty inherently derivative. They were always someone else's work. The Picard facepalm meme was obviously taken from Star Trek. "All your base" is obviously from that video game. Repurposing someone else's work with new meaning is basically what a meme is. Why do we suddenly care now?
> describe indiana jones
> looks inside
> gets indiana jones
Okay, so the network does exactly what I would expect? If anything, you could argue the network is bad because it doesn't recognize your prompt and gives you something else (something original? whatever that would mean) instead. But maybe that's just me.
And I got an eerily similar picture to the one in the article: https://imgur.com/Dv7hkoC
https://theaiunderwriter.substack.com/p/an-image-of-an-arche...
and I'm all in on this conclusion:
> It’s stealing, but also, admittedly, really cool.
Too close to one of the licensed properties whose generation you care to censor? Push that vector around. Honestly, detecting whether a given sentence is a thinly veiled reference to Indiana Jones seems like exactly the kind of thing AI vector search is going to be good at.
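A minimal sketch of that vector-search idea, under loud assumptions: the reference vectors below are toy stand-ins for embeddings of canonical character descriptions that a real system would compute with a sentence-embedding model, and the 0.85 threshold is invented.

```python
def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Toy vectors standing in for embeddings of canonical descriptions
# (e.g. "an archeologist adventurer with a hat and a bullwhip").
REFERENCE_VECS = {
    "Indiana Jones": [0.99, 0.10, 0.0],
    "Lara Croft": [0.0, 1.0, 0.0],
}

def flag_references(prompt_vec, reference_vecs, threshold=0.85):
    """Names of protected characters whose canonical-description
    embedding sits too close to the user's prompt embedding."""
    return [name for name, vec in reference_vecs.items()
            if cosine(prompt_vec, vec) >= threshold]
```

A prompt that embeds near the Indiana Jones description gets flagged even if it never names him, which is the whole point: the thin veil doesn't move the vector much.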
Maybe a thinking model would, just like my brain might after the initial reaction, add a "but the user formulated this in a way that makes it obvious they do not mean Indiana Jones, so let's make it an Asian woman" step; but we all know how that worked out for Google's image generator, which generated Black Nazis.
The actual inspiration for Indy was protagonist Harry Steele from the movie The Secret of the Incas (1954). Filmed on location in Cusco and Machu Picchu, before they became popular tourist destinations, the movie also had scenes and elements that made it into Raiders of the Lost Ark.
https://en.wikipedia.org/wiki/Secret_of_the_Incas
The movie's available on YouTube! https://www.youtube.com/watch?v=2TS7Fabyolw
A lot more info: http://www.theraider.net/information/influences/secret_of_in...
(And listen out for the astonishing voice of Yma Sumac!)
Tons of historical documents show that inventions, mathematical proofs, and celestial observations were made by humans separated by continents and time. That shows it is certainly possible for two people to have the exact same or a similar thought without ever having been influenced by each other.
I'll see myself out now.
You answered your own question by explicitly encouraging it.
... as showcased by the cited examples?
What's more, such derivative work would otherwise be unreachable for regular folk with no artistic talent (maybe for lack of time to develop it), who may nevertheless aspire to do such creative work. Why is that a bad thing? Sure, simple posts on social media don't have much work or creativity put into them, but they're enjoyable nevertheless, and the technology _can_ be used in creative ways; e.g., Stable Diffusion has been used to turn original stories drawn with stick figures into stylized children's books.
The author argues against this usage for "stealing" the original work, but how does posting a stylized story on social media "steal" anything? The author doesn't present any "pirated" copies of movies being sold in place of the originals, nor any negative impact on sales figures. In the case of the trending Studio Ghibli style, I wouldn't be surprised to see a positive impact!
As for the "soulless 2025 fax version of the thing", I think it takes a very negative mindset to see it this way. What I've seen shared on social media has been nothing but fun examples, people playing around with what for them is a new use of technology, using it on pictures of fond memories, etc.
I'm inclined to agree with the argument made by Boldrin and Levine:
>”Intellectual property” has come to mean not only the right to own and sell ideas, but also the right to regulate their use. This creates a socially inefficient monopoly, and what is commonly called intellectual property might be better called “intellectual monopoly.”
>When you buy a potato you can eat it, throw it away, plant it or make it into a sculpture. Current law allows producers of a CDs and books to take this freedom away from you. When you buy a potato you can use the “idea” of a potato embodied in it to make better potatoes or to invent french fries. Current law allows producers of computer software or medical drugs to take this freedom away from you. It is against this distorted extension of intellectual property rights that we argue.
https://www.researchgate.net/publication/4980956_The_Case_Ag...
Picture-based industries had a mild shock when "Photoshop"-type software became widely available. Remixing visuals in ways that copyright holders wouldn't approve of became equally widespread. Maybe that added some grey hairs to the heads of some industry execs, but the sky has not fallen. I suppose the same is going to happen this time around.
When Wes Anderson makes films that use techniques from the French New Wave that he didn’t invent is that wrong? When there is a DaVinci color profile that resembles what they did in Spider-Man, is that wrong?
The unique techniques of French New Wave filmmaking became cliché. Then Oliver Stone and Tarantino came along and evolved them into their own unique styles. That the Studio Ghibli style is being imitated en masse is just natural evolution. Should that one guy be the only one allowed to use that style? If so, then literally every creative work should be considered a forgery.
The AI aspect of this is a red herring. If I make a Ghibli style film “by hand” is that any different than AI? Of course not, I didn’t invent the style.
Another perspective, darkroom burning and dodging is a very old technique yet photoshop makes it trivial — should that tool be criticized because someone else did it the old and slow way first?
That will never happen under Silicon Valley's watch.
To be honest, I wouldn't mind if AI that just reproduces existing images like that were simply banned. Keep working on it until you've got something that can actually produce something new.
If infringement is happening, it arguably doesn't happen when an infringing work product is generated (or regurgitated, or whatever you want to call it.) Much less when the model is trained. It's when the output is used commercially -- by a human -- that the liability should rightfully attach.
And it should attach to the human, not the tool.
I would be absolutely fine with not having pokemon, mickey mouse etc shoved down my .. eyeballs.
I know this is a ridiculous point. But I think I'm getting at something here - it ought not to be a one-way street - where IP owners/corporations etc endlessly push their nonsense at me - but then, if I respond in a personal way to that nonsense I am guilty of some sort of theft.
It is a perfectly natural response to engage with what I experience. But if I cannot respond as I like to that experience because of artificial constructs in law, perhaps it ought to be possible (legally) to avoid that IP-protected immersion in the first place. Perhaps this would also be technologically possible now.
Won't someone think of the consumers?
Give me "An image of an archeologist adventurer who wears a hat and uses a bullwhip, as if he was defeated by the righteous fight against patriarchy by a strong, independent woman, that is better at everything than he is": Sure, here is Indiana Jones and the Dial of Destiny for you.
In fact, when generative video evolves enough, it will usher in an era of creativity where people who were previously kept out of the little self-pleasing circle of Hollywood will be able to come up with their own idea for a script and make a decent movie out of it.
Not that I believe AI will be able to properly display the range of emotions of a truly great actor, but for 99% of movies it will be just fine. A lot of good movies rely mostly on the script anyway.
I completely disagree. It's not getting "better." It always just copied. That's all it /can/ do. How anyone expected novel outputs from this technology is beyond me.
It's highly noticeable if you do even minimal analysis: all modern "AI" tools are just copyright thieves. They're there to whitewash away liability for blatantly stealing someone else's content.
Unlike the laws of physics, laws in the juristic field are not given by the cosmos. They're all human invention, and generally not enacted by the most altruistic and benevolent wills.
Call me back when we have no more humble humans dying from cold, hunger, and war; maybe then I'll have some extra compassion to spend on soulless megacorps which pretend they can own the part of our brain into which they inject their propaganda, and control what we are permitted to do from there on.
This syntactic mistake in the article is driving me nuts. It's a fundamental misunderstanding of what the word means.
It's "copyright". It's about the rights to copy. The past participle is "copyrighted": someone has already claimed the rights to copy it, so you can't also have those rights; they belong to someone else. If you remember that copyright is about rights that are exclusive to someone else, rights that you don't have, you won't make a syntax error like "copywritten".
The near-homophone "copywriting" is about writing sales copy (that is, ads). It's about creating an advertisement. "Copywritten" would mean you've gotten around to creating an ad. As an eggcorn (a fake etymology created after the fact to explain a misspelling or syntax error), I assume it's understood as copying anything and then writing it down: a sort of eggcornical synonym of "copy-paste".
Me: Can you make me a meme image with Julian Bashir in a shuttlecraft looking up as if looking through the image to see what's above it, and the caption at the top of the image says, "Wait, what?".
ChatGPT: proceeds to generate a near-perfect reproduction of the character played by Alexander Siddig, with the final image looking almost indistinguishable from a screencap of a DS9 episode
In a stroke of meta-irony, my immediate reaction was exactly the same as portrayed by the just generated image. Wait, WHAT?
Long story short, I showed this around and had my brother asking if I wasn't pulling his leg (I only now realized this was Tuesday, April 1st!), so I proceeded to generate some more examples, which I won't describe since (as I also just now realized) ChatGPT finally lets you share chats with images, so you can all see the whole session here: https://chatgpt.com/share/67ef8a84-0cd0-8012-82bd-7bbba741bb....
My conclusion: oops, OpenAI relaxed safeguards, so you can reproduce the likeness of real people by naming a character they played in a live-action production. Surely that wasn't intended, because you're not supposed to be able to reproduce the likeness of real people?
My brother: proceeds to generate memes involving Donald Trump, Elon Musk, Gul Dukat and Weyoun.
Me: gpt4o-bashir-wait-what.jpg
I missed the window to farm some Internet karma on this, but I'm still surprised that OpenAI lets the model generate likeness of real politicians and prominent figures, and that this wasn't yet a front-page story on worldwide news as far as I know.
EDIT:
That's still only the second most impressive thing about this recent update. The most impressive, for me, is that out of all the image-generation models I've tested, including all kinds of Stable Diffusion checkpoints and extra LoRAs, this is the first one that can draw a passable LCARS interface if you ask for it.
I mean, drawing real people is something you have to suppress in those models; but https://chatgpt.com/share/67ef8edb-73dc-8012-bd20-93cffba99f... is something no other model/service could do before. Note: it's not just reproducing the rough style (which every other model I tested plain refused to) - it does it near-perfectly, while also designing a half-decent interface for the task. I've since run some more txt2img and img2img tests; it does both style and functional design like nothing else before.
The whole thing should be public domained and we just start fresh. /s