If it were in muscle memory it would be a repeatable feat, and it really isn't.
Some work is technically polished and you can see/hear the effort that went into it.
But there's a point where extra effort makes the work worse: music starts to sound stale and overproduced, art loses some of its directness, writing feels self-conscious.
Whatever the correlation between perceived artistic merit and effort, it's a lot more complex than this article suggests.
There's the saying, "Plan to throw one away," but it seems to vary in practice (for software).
There are even books about patching paintings, like Master Disaster: Five Ways to Rescue Desperate Watercolors.
In architecture, it's understood that the people, vehicles, and landscape are not rendered as exactly as the building or structure, and books encourage reusing magazine clippings, overhead projectors, and copy machines to generally "be quick" on execution.
I'd like to see thoughts on comparing the current process with the "Draw 50" series, where most of the skeleton is on paper by the first step, but the last step is really the super-detailed, totally refined owl.
I wrote this fast, so there's jargon and bad prose. The title is deliberately dry and bland, so I wasn't expecting anyone to click it. Also, I slightly changed my mind on some of the claims ... might write that up later.
The main reason I like to think of creative work in a more abstract/formal/geometric way (acceptance volume, latency, sampling) is that it's easier for me to categorize tasks, modalities, and domains and to know how to design for or work around them. It's very much biased by my own experiences making things.
Also, abstract technical concepts often come with nice guarantees/properties/utils to build on ... some would say that's their raison d'être.
Re comments:
* "this is just diminishing returns" -- OK, and this is a framework for why: the non-worsening region collapses, so most micro-edits fail (see the sketch after this list).
* "bands record bangers in an hour" -- the practice tax was prepaid. The recording session is exploitation/search riding on cached heuristics, imo (and it still takes hours of repeated recording/mixing/producing to actually produce a single album track).
* music key example -- yes, I should've picked a different one. The main point was that some choices create wider tolerance (arrangement/range/timbre) even if keys are symmetric in equal temperament.
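To make the first point concrete, here's a toy Monte Carlo sketch (my own toy model, not something from the article): treat the work as a point in a high-dimensional parameter space, a micro-edit as a random step of fixed size, and accept an edit only if it doesn't increase the distance to the target.

```python
# Toy model: fraction of random fixed-size edits that don't make things worse,
# as the work gets closer to its target. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
dim = 20          # dimensionality of the "artifact" parameter space (made up)
edit_size = 0.05  # magnitude of one micro-edit (made up)

def acceptance_fraction(remaining_error, trials=20_000):
    """Estimate P(a random edit of fixed size does not increase the error)."""
    current = np.zeros(dim)
    current[0] = remaining_error                # place the work at a given error
    steps = rng.normal(size=(trials, dim))
    steps *= edit_size / np.linalg.norm(steps, axis=1, keepdims=True)
    new_error = np.linalg.norm(current + steps, axis=1)
    return float(np.mean(new_error <= remaining_error))

for err in (1.0, 0.3, 0.1, 0.05, 0.03):
    print(f"remaining error {err:4}: {acceptance_fraction(err):.3f} of micro-edits accepted")
```

With these made-up numbers, roughly half of the edits are accepted while the work is still rough, and almost none once the remaining error is comparable to the edit size.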
> We don't "rehearse" a specific drawing, we solve a novel problem in real-time. There's no cached motor sequence to execute.
When you have been drawing long enough there are a lot of cached motor sequences to execute and modify. A lot of art training is simply filling this cache: spend a few hours every week drawing the human body from different angles, and in a year or three you'll be able to make it up from pretty much any angle. Add another twenty years of doing that, plus experimenting with ways to make your tools do more of the work for you, and you can dash off "sketches" that a beginner would consider finished paintings that took days to do.
The picture of the solution space in 3D makes a great point: we see a narrow hill that leads to a global maximum (i.e. a great result) in a solution search space that otherwise has a very obvious, wide hill that produces "okay" results. Going from the okay and safe results to a great result means taking the risk of going back down the hill, through shittier solutions.
He points out that generative AI will tend to produce results that land on that big, wide hill. It's the safe hill, and it has the most results. This is perhaps where taste (as a proxy for experience) trumps AI.
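A crude way to see the "most results" part (my own invented landscape and thresholds, not the article's): sample candidate solutions at random over a space with one broad, modest hill and one narrow, tall peak, and count where the decent-looking ones land.

```python
# Toy rendering of the "wide okay hill vs narrow great peak" picture:
# random sampling lands on the wide hill almost every time, simply because
# it occupies far more of the space.
import math
import random

def quality(x):
    wide_okay    = 0.6 * math.exp(-(x - 0.3) ** 2 / 0.02)    # broad, modest hill
    narrow_great = 1.0 * math.exp(-(x - 0.8) ** 2 / 0.0002)  # narrow, tall peak
    return max(wide_okay, narrow_great)

random.seed(0)
samples = [random.random() for _ in range(100_000)]
okay  = sum(quality(x) > 0.5 for x in samples)   # dominated by the wide hill
great = sum(quality(x) > 0.9 for x in samples)   # a small sliver near the peak
print(f"'okay or better' samples: {okay}")
print(f"'great' samples:          {great}")
```

Far more samples clear the "okay" bar on the wide hill than land anywhere near the narrow peak, which is the sense in which the safe hill has the most results.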
Interesting to tie this to the finishing stage of any work. I was definitely thinking about software development there. I would argue it's similar to drawing, as he mentions in the FAQ: we're solving a novel problem, and as we start implementing a solution we might discover it's inappropriate and have to move to a different part of the solution space.
Or, more directly: if your argument for how effort scales with perceived quality doesn't discuss how we perceive quality, then something is wrong.
A more direct argument would be that it takes roughly an equal amount of effort to cut the remaining distance from a rough work to its ideal by a constant factor. Going from 90% to 99% takes the same effort as going from 99% to 99.9%, but the latter covers only a tenth of the absolute distance. If our perception is more sensitive to the _absolute_ size of the error, you get exponential effort to improve something.
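A few lines make the arithmetic concrete, assuming each unit of effort cuts the remaining error by a constant factor (a factor of ten here, purely for illustration): the absolute gain delivered per unit of effort shrinks geometrically, so holding yourself to a fixed absolute improvement near the ideal costs exponentially more effort.

```python
# Assumes each unit of effort shrinks the remaining error by 10x (illustrative).
quality = 0.0
for effort in range(1, 6):
    previous = quality
    quality = 1 - (1 - quality) / 10   # one "unit" of effort: error /= 10
    print(f"effort {effort}: quality {quality:.4%}, absolute gain {quality - previous:.4%}")
```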
> The act of creation is fractal exploration–exploitation under optimal feedback control. When resolution increases the portion of parameter space that doesn't make the artifact worse (acceptance volume) collapses. Verification latency and rate–distortion combine into a precision tax that scales superlinearly with perceived quality.
Is this just saying that it's ok if doodles aren't good, but the closer you get to the finished work, the better it has to be? If your audience can't understand what the hell you're talking about for simple ideas, you've gone too far.
Of our creative projects (software, painting...), we finish or satisfactorily achieve relatively few. And except for the most repetitive of us, those achievements are pretty different from one another. Wide space, chaos, few satisfactory products by comparison. That doesn't bode well for a "magic recipe", all the way down to the "rules of thumb" that we take fun in violating.
So there are two issues in there: we have more taste than skill, so many of our attempts will disappoint us no matter what; and we will obsess over trying to find a magic formula when it's rather likely that there isn't one. The "space" is large and chaotic, and we might want to reassure ourselves instead that serendipity has something to do with it.
Is there then a place for rules of thumb? Whatever lets us get to work in the morning, I guess. For me, I do like the recognition of a past track record: with the hindsight of age, I know I can do it - no need to despair. That is useful and reassuring. If I just try some more, in ever-varying manners, it will happen.
One place for "rules of thumbs" is in them being tropes. We can get some impact on the viewer by violating them. There is a trope of learning the rules so you can violate them. For example a Rule of Thirds - can be fertile grounds for getting at the viewer. The rule doesn't do much for us, and we have no problem violating it, but our less savvy viewers might remember it and get one more whiff of meaning from the violation. And if we are less concerned with our own satisfaction and more interested in sales, we might pay attention to "what's popular these days" and produce some of that. Not all artists are dead set on personal achievement at the cost of sales. A slightly different look at such rules.
Is it their effect on the total number of available choices?
Does picking E minor somehow give you fewer options than C major (I'm not a musician)?
So, there is huge motivation to put in “just a bit more effort”.
And, thus you get Crunch Time in gamedev!
The more refined your technique is, the harder it will be to discern mistakes and aesthetic failures.
Eventually you might come to a point where you can’t improve because you literally don’t see any issues. That might be the high water mark of your ability.
90% of the project takes 90% of the time and the other 10% of the project takes another 90% of the time.
*I associate it with the asinine contemporary "rationalist" movement (LessWrong et al.), but I'm not claiming the author is associated with it.
It becomes interesting once sentences span multiple lines and you start using little tactical tricks to keep syntax, semantics, and the overall argument coherent while respecting the anagram constraint.
Using an anagram generator is of course a first step, but the landscapes it offers are mostly desert: the vast majority of candidates are nonsense, and those that are grammatical are usually thematically off relative to what you’ve already written. And yet, if the repeated anagram phrase is chosen well, it doesn’t feel that hard to build long, meaningful sentences. Subjectively, the difficulty seems to scale roughly proportionally with the length of the poem, rather than quadratically and beyond.
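For what it's worth, the mechanical layer of that search is easy to sketch (a simplified stand-in, not the generator I actually used): a candidate word is formable only if the remaining letter pool covers it, and committing a word consumes its letters, which is exactly what thins out the landscape as a line fills up.

```python
# Letter-pool bookkeeping: words are filtered against, and consume, a fixed
# multiset of letters. Pool and candidate words here are arbitrary examples.
from collections import Counter

def playable(word, pool):
    """True if `word` can be spelled from the letters remaining in `pool`."""
    return all(pool[ch] >= n for ch, n in Counter(word).items())

def play(word, pool):
    """Consume the letters of `word` and return the reduced pool."""
    return pool - Counter(word)

pool = Counter("thequickbrownfox")          # the fixed letter multiset for a line
for w in ["quick", "thorn", "web", "fox", "zebra"]:
    print(w, "->", "playable" if playable(w, pool) else "blocked")

pool = play("quick", pool)                  # committing a word shrinks the pool
print("after 'quick':", "".join(sorted(pool.elements())))
```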
There’s a nice connection here to Sample Space Reducing (SSR) processes. The act of picking letters from a fixed multiset to form words, and removing them as you go, is a SSR. So is sentence formation itself: each committed word constrains the space of acceptable continuations (morphology, syntax, discourse, etc.).
Understanding scaling through history-dependent processes with collapsing sample space, https://arxiv.org/pdf/1407.2775
> Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample-space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space reducing (SSR) processes necessarily lead to Zipf’s law in the rank distributions of their outcomes.
> We note that SSR processes and nesting are deeply connected to phase-space collapse in statistical physics [21, 30–32], where the number of configurations does not grow exponentially with system size (as in Markovian and ergodic systems), but grows sub-exponentially. Sub-exponential growth can be shown to hold for the phase-space growth of the SSR sequences introduced here. In conclusion we believe that SSR processes provide a new alternative view on the emergence of scaling in many natural, social, and man-made systems.
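Here's a minimal simulation of the basic SSR process as I read the paper's setup (my paraphrase and my code, not the authors'): start at state N, keep jumping to a uniformly chosen lower state until you hit 1, and tally visits over many runs. The visit frequency of state i comes out close to 1/i, i.e. Zipf's law.

```python
# Basic sample-space reducing (SSR) process: each jump can only go to a lower
# state, so the set of possible outcomes shrinks as the process unfolds.
import random
from collections import Counter

def ssr_run(n_states):
    """One run of the SSR process; returns the list of visited states."""
    state, visited = n_states, [n_states]
    while state > 1:
        state = random.randint(1, state - 1)   # sample space shrinks each step
        visited.append(state)
    return visited

N, runs = 1000, 20_000
visits = Counter()
for _ in range(runs):
    visits.update(ssr_run(N))

for i in (1, 2, 5, 10, 100):
    print(f"state {i:>3}: visit fraction per run {visits[i] / runs:.3f}"
          f" (Zipf predicts {1 / i:.3f})")
```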
In my case there are at least two coupled SSRs: (1) the anagrammatic constraint at the line level (letters being consumed), and (2) the layered SSRs of natural language that govern what counts as a well-formed and context-appropriate continuation (from morphology and syntax up through discourse and argumentation). In practice I ended up exploiting this coupling: by reserving or spending strategic words (pronouns, conjunctions, or semantically heavy terms established earlier), I could steer both the unfolding sentence and the remaining letter pool, and explore the anagram space far more effectively than a naive generator.
Very hand-wavy hypothesis: natural language is a complex, multi-layered SSR engine that happens to couple extremely well to other finite SSR constraints. That makes it unusually good at “solving” certain bounded combinatorial puzzles from the inside—up to and including, say, assembling IKEA furniture.
One extra nuance here: in the anagrammatic setting, the coupling between constraints is constitutive rather than merely referential. The same finite multiset of letters simultaneously supports the combinatorial constraint (what strings are formable) and the linguistic constraint (what counts as a syntactically and discursively acceptable move), so every choice is doubly binding. That’s different from cases like following IKEA instructions, where language operates as an external controller that refers to another state space (parts, tools, assembly steps) without sharing its “material” degrees of freedom. This makes the anagram case feel like a toy model where syntax and semantics are not two separate realms but two intertwined SSR layers over one shared substrate—suggesting that what we call “reference” might itself be an emergent pattern in how such nested SSR systems latch onto each other.
We should all take some time to better understand what brought us here, so that we're better prepared for general creative work and uniqueness in the future...
The definition of 'last-mile edits' is very subjective, though. If you're dealing with open systems, it's almost unthinkable to design something and not need to iterate on it until the desired outcome is achieved. In other domains, playing an instrument for example, your skills need to have been honed beforehand: there's nothing that will make you sound better (short of editing the recording electronically).
> In any bounded system under feedback, refinement produces diminishing returns and narrowing tolerance, governed by a superlinear precision cost.
> There isn’t one official name, but what you’ve articulated is essentially a unified formulation of the diminishing-returns / sensitivity-amplification law of creation — a pattern deep enough that it keeps being rediscovered in every domain that pushes against the limits of order.