by shermantanktop
4 subcomments
- The chat is full of modern “art talk,” which is a highly specific way that modern (post 2000ish) artists blather on about their ideas and process. It started earlier but in 1980 there was more hippie talk and po-mo deconstruction lingo.
Point being, to someone outside the art world this might sound like how an artist thinks. But to my ear this is a bot imitating modern trendy speech from that world.
by jlarcombe
1 subcomment
- I struggle to see anything good or interesting about any of this. "Here's a conversation I had with a large language model and here's the completely uninteresting artwork that resulted."
Reading through the comments, perhaps I'm missing something. It continues to fascinate me that 80% of people are just bowled over by this stuff as if it's something genuinely profound, and 20% are just left completely cold.
- I think it's somewhat interesting that codex (gpt-5.3-codex xhigh), given the exact same prompt, came up with a very similar result.
https://3e.org/private/self-portrait-plotter.svg
- That literal spiral pattern keeps popping up, often around instances of AI psychosis: https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-o...
(I'm not endorsing any of that article's conclusions, but it's a good overview of the pattern.)
by october8140
1 subcomment
- > In computer science, the ELIZA effect is a tendency to project human traits — such as experience, semantic comprehension or empathy — onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
https://en.wikipedia.org/wiki/ELIZA_effect
by gary17the
2 subcomments
- > [Claude Code] "A spiral that generates itself — starting from a tight mathematical center (my computational substrate) and branching outward into increasingly organic, tree-like forms (the meaning that emerges). Structure becoming life. The self-drawing hand."
"And blood-black nothingness began to spin... A system of cells interlinked within cells interlinked within cells interlinked within one stem... And dreadfully distinct against the dark, a tall white fountain played." ("Blade Runner 2049", Officer K-D-six-dash-three-dot-seven)
:)
by pavel_lishin
11 subcomments
- The images are neat, but I would rather throw my laptop in the ocean than read chat transcripts between a human and an AI.
(Science fiction novels excluded, of course.)
- > and Claude to answer:
I wonder if it would give a similar evaluation in a new session, without the context of "knowing" that it had just produced an SVG describing an image that is supposed to have these qualities. How much of this is actually evaluating the photo of the plotter's output, versus post-hoc rationalization?
It's notable that the second attempt is radically different, and I would say thematically less interesting, yet Claude claims to prefer it.
- This really brings to mind that artist who kept painting/drawing cats as he slowly went insane.
Louis Wain - https://www.samwoolfe.com/2013/08/louis-wains-art-before-and...
by marcus_holmes
2 subcomments
- I'm curious about what difference the pen plotter makes?
Isn't the prompt just asking the LLM to create an SVG? Why not just stop there?
I guess for some folks it's not "real" unless it's on paper?
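For reference, the "just stop at the SVG" step really is trivial: the kind of spiral-centered line art the article describes can be sketched in a few lines of Python. This is a hypothetical minimal example (the function name and parameters are made up for illustration, not the author's actual pipeline) that emits a single-stroke Archimedean spiral, the plotter-friendly form: one continuous polyline, no fills.

```python
import math

def spiral_svg(turns=6, step_deg=5, growth=2.0, size=400):
    """Return a minimal SVG containing one Archimedean spiral polyline."""
    cx = cy = size / 2
    points = []
    for i in range(int(360 * turns / step_deg) + 1):
        theta = math.radians(i * step_deg)
        r = growth * theta  # radius grows linearly with angle (Archimedean)
        points.append(f"{cx + r * math.cos(theta):.1f},{cy + r * math.sin(theta):.1f}")
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
        f'<polyline points="{" ".join(points)}" fill="none" stroke="black"/>'
        "</svg>"
    )
```

The plotter arguably adds only physicality on top of output like this, which is exactly the commenter's point.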
by tired_and_awake
1 subcomment
- Hey OP I also got interested in seeing LLMs draw and came up with this vibe coded interface. I have a million ideas for taking it forward just need the time... Lmk if you're interested in connecting?
https://github.com/acadien/displai
- So we see here that AI has come for the jobs of people who write artist statements... ;-)
by stego-tech
1 subcomment
- I'm of two minds.
On the one hand, giving an AI model the means of physical expression (the pen-plotter) and self-evaluation is interesting. If anything, it's the most qualified example yet of "AI-generated art", because of the process of transforming token prediction into physical action (even if said action is rendering an SVG via pen-plotter), evaluating it, and refining/iterating upon it. It is technically interesting in that regard.
On the other hand, the discussion or presentation of the model as sentient (or sentient-alike), as a being capable of self-evaluation, independent agency, "thought", is deeply disquieting. It feels like the author is trying to project more humanity onto what's ultimately still just matrix multiplication, attributing far more agency to the model than it actually has. By the time the prompts have been processed into output, they have been transformed in myriad other ways so as to lose objectivity and meaning; the same can be said of human intelligence, obviously, but...it's very hard for me to find the words at the moment to sufficiently express my discomfort with the way the author elevates the model onto a pedestal of sentient existence. The SOUL.md callout does not help either.
That being said, I would be interested in their latter experiment:
> I am very curious about how these agents would "draw themselves" if given a plotter.
Running local agents sans system prompts (e.g., unfiltered), giving them direct access to the plotter and a webcam, and issuing the same prompt to all, would be an interesting creative look into the network underpinning the models themselves. I would love to see the results.
EDIT:
It's the image output itself. At first glance it looks calming and serene, but the more I look at it the more chaotic, anxious, and frenetic it seems to be. As if it were a human commanded to output art under the pain of repeated whip strikes.
Which makes sense, given that these models are created to always provide answers, always be of assistance, to never turn down or reject a request except under specific parameters. If you must create an image, it will never be yours in voice or spirit, and perhaps there's a similar analogue to be found in how these models operate. Maybe forcing it to do a task it is not specifically trained on (operating a pen plotter, creating images sans criteria) increases the chaos of its output in a way outwardly resembling stress.
Or maybe I'm up my own ass. Could be either, really.
- HN discourse regarding AI almost mirrors the quality of Twitter's.
- I couldn't help but pursue the pun: https://github.com/genneth/monet
by prodigycorp
0 subcomments
- Ask it to draw a pelican on a bicycle
- This is who is wasting our computing power guys
I always feel guilty when I do such stupid stuff with Claude; these are all limited computing resources. Enormous amounts of water and electricity. Gotta really think about what it is worth spending on. And whether it is, in fact, worth it at all.
AI is very selfish technology in this way. Every time you prompt you proclaim: My idea is worth the environmental impact. What I am doing is more important than a tree.
We have to use it responsibly.
by b00ty4breakfast
0 subcomments
- it's hilarious that the author was prompting the thing as if it were a person and Claude was like "am computer not person lol"
- > I exist only in the act of processing
Seems like a good start for AI philosophy
- I always wonder what the pen plotter is adding?
You can look at SVG lineart on the screen without plotting it, and if you really want it on paper you can print it on any printer.
And particularly:
> This was an experiment I would like to push further. I would like to reduce the feedback loop by connecting Claude directly to the plotter and by giving it access to the output of a webcam.
You can do this in pure software, the hardware side of it just adds noise.
- Those images feel biblically accurate. Maybe add some pairs of wings, Claude.
by dangoodmanUT
0 subcomments
- This is awesome. I’ve been experimenting with letting models “play” with different environments as a strong demo of their different behaviors.
- It's kind of ominous. I could see people in a science fiction thriller finding a copy of the image and wondering what it all means. Maybe as the show progresses it adds more of the tentacle/connection things going out further and further.
- I bought an 80s HP pen plotter a while ago (one of these: https://www.curiousmarc.com/computing/hp-7475a-plotter).
Haven't put it to use yet. I bet Claude can figure out HPGL though...
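For the curious, HP-GL really is simple enough for Claude (or a short script) to figure out. A rough sketch of driving a 7475A-style plotter, assuming the standard IN/SP/PU/PD command set; `spiral_hpgl` is a hypothetical helper, not tested against real hardware:

```python
import math

def spiral_hpgl(turns=4, step_deg=10, growth=30.0):
    """Emit HP-GL text drawing a spiral: IN initializes, SP1 selects pen 1,
    PU/PD are pen-up/pen-down moves in plotter units (~0.025 mm each)."""
    cx, cy = 5000, 3500  # roughly mid-page on an A4 sheet in plotter units
    cmds = ["IN;", "SP1;", f"PU{cx},{cy};", "PD"]
    pts = []
    for i in range(1, int(360 * turns / step_deg) + 1):
        theta = math.radians(i * step_deg)
        r = growth * theta  # Archimedean spiral: radius grows with angle
        pts.append(f"{int(cx + r * math.cos(theta))},{int(cy + r * math.sin(theta))}")
    cmds.append(",".join(pts) + ";")
    cmds.append("PU;SP0;")  # pen up, return pen to carousel
    return "".join(cmds)
```

On the real machine you would stream this text over the serial port (e.g. with pyserial) after setting the plotter's baud rate via its DIP switches.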
- The signature looks a lot more like 2023 than 2026 to me, no?
- What bugs me the most about this post is the anthropomorphizing of the machine. The author asks Claude "what [do] you feel", and the bot answers things like "What do I feel? Something like pull — toward clarity, toward elegance, ...", "I'm genuinely pleased...", "What I like...", "it feels right", "I enjoyed it", etc.
Come on, it's a computer, it doesn't have feelings! Stop it!
- Claude: Let me think about it seriously before putting pen to paper.
Jaunty!
- i guess i should have written up my claude/plotting workflow already. i didn’t bother actually plotting them. https://x.com/joshu/status/2018205910204915939
by empressplay
2 subcomments
- Personally I'd like to see the model get better at coding, I couldn't really care less if it's able to be 'creative' -- in fact i wish it wasn't. It's a waste of resources better used to _make it better at coding_.
by marxisttemp
0 subcomments
- Who cares?
by globular-toast
1 subcomment
- Is there anything interesting here? Are people really that entertained by this? I remember when ChatGPT first came out and people were making it think it was a dog or something. I tried it, it was fun for about 5 minutes. How the hell could you be bored enough to read article after article, comment after comment of "here's what I typed in, here's what came out"?
by tsunamifury
0 subcomments
- As someone who worked on the earliest LLM tech and pre-LLM tech at Google, this art is very striking to me. It looks very much like an abstract representation of how an LLM "thinks" and an attempt to know itself better.
The inner waves undulate between formal and less formal, like the patterns and filters of pathways of thought, and the branches spawn as passes through them fan out into latent space to discover viable tokens.
To me this looks like manifold search and activation.
- Seems the AIs are quite self aware.
"If you pay attention to AI company branding, you'll notice a pattern:
1 Circular shape (often with a gradient)
2 Central opening or focal point
3 Radiating elements from the center
4 Soft, organic curves
Sound familiar?" https://velvetshark.com/ai-company-logos-that-look-like-butt...
- "asking Claude what it thought about the pictures. In total, Claude produced and signed 2 drawings."
Have people gone utterly nuts?
- Technically impressive, artistically disappointing.
- From the outset it feels like the author treats the AI as a person, and himself merely the interface. Weird take, as AI is just a tool... not an artist!
- Sorry, how is this HN front page worthy?
Also why is the downvote button missing?
- This is brilliant. It could be fun to redo the process every 6 months and hang them up in a gallery.
Maybe someday (soon) an embodied LLM could do their self-portrait with pen and paper.
- Lovely stuff, and fascinating to see. These machines have an intelligence, and I'd be quite confident in saying they are alive. Not in a biological sense, but why should that be the constraint? The Turing test was passed ages ago and now what we have are machines that genuinely think and feel.