I did something related in the past: https://github.com/RussellSprouts/6502-enumerator. It uses C++ templates to share an emulator implementation between z3-powered symbolic execution and actual execution. It was meant to find equivalence between random instruction sequences for peephole optimization, rather than optimizing a specific input sequence.
Shadow instructions are very interesting and cursed. I've seen them used in very limited circumstances for NOP slides in timing code: https://www.pagetable.com/?p=669. It would be fun to see them applied in otherwise normal code. My enumerator wouldn't support this -- it didn't execute actual 6502 instructions from bytes -- it had its own internal representation for `the first arbitrary absolute pointer` or `the second arbitrary immediate constant`. These would be initialized either with random concrete values or with z3 variables.
But this also raises the question: is there some clever way to use the undocumented SAX instruction, which does an AND between the A and the X registers, to achieve something similar? There is an old trick to compute min and max using no branches, which is not immediately applicable (https://graphics.stanford.edu/~seander/bithacks.html#Integer...) - but maybe there is some other trick hiding in there?
After reading the article, though, I feel like I definitely need a superoptimiser, to see what could be improved :)
I imagine lots of demo coders glancing back and forth between that writeup and their own carefully hand-tuned assembly.
sta $00    ; stash A in zero page
asl a      ; A *= 2
asl a      ; A *= 2 (now A*4)
adc $00    ; A = A*4 + A = A*5 (carry assumed clear)
asl a      ; A *= 2 (now A*10)
This is also 7 bytes, but is faster since adc $00 is 3 cycles, vs rra $00 being 5 cycles.
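If anyone wants to double-check the arithmetic, here's a quick Python sketch of that sequence (my own toy emulation, not real 6502 code; it assumes carry is clear on entry, and the result is only exact while every intermediate fits in 8 bits, i.e. for A <= 25):

```python
def times_ten(a):
    """Emulate: sta $00 / asl a / asl a / adc $00 / asl a."""
    mem = a                        # sta $00 -- stash A in zero page
    carry = 0                      # assume C is clear on entry
    for _ in range(2):             # asl a twice: A = A * 4
        carry = (a >> 7) & 1       # asl shifts bit 7 into carry
        a = (a << 1) & 0xFF
    total = a + mem + carry        # adc $00: A = A*4 + A (+ carry)
    carry = total >> 8
    a = total & 0xFF
    a = (a << 1) & 0xFF            # asl a: A = A * 10
    return a

# Exact for every input where no intermediate overflows 8 bits:
for a in range(26):
    assert times_ten(a) == 10 * a
```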
The A = max(A, X) example is certainly interesting, but not very useful since it loops through the code twice (very slow) and assumes that $8a is available. The much faster obvious version only adds one byte:
stx $00    ; stash X in zero page
cmp $00    ; compare A with X; carry set when A >= X
bcs done   ; A is already the max
txa        ; otherwise take X
done:
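For the skeptical, the branch logic can be modeled in a few lines of Python (a toy of mine, not a real emulator; the key fact is that cmp sets carry when A >= operand, which is what bcs tests):

```python
def max_a_x(a, x):
    """Emulate: stx $00 / cmp $00 / bcs done / txa / done:"""
    mem = x                          # stx $00 -- stash X in zero page
    carry = 1 if a >= mem else 0     # cmp $00 sets C when A >= operand
    if not carry:                    # bcs done -- taken when A >= X
        a = x                        # txa -- otherwise copy X into A
    return a                         # done:

# Exhaustive check over all 8-bit register pairs:
for a in range(256):
    for x in range(256):
        assert max_a_x(a, x) == max(a, x)
```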
The post uses a 4 instruction program as an example having about 256^4 or 4 billion combinations. Most interesting programs are 10, 100, 1000+ instructions long, which is too large of a search space to explore by brute force.
So GAs use a number of tricks to investigate the search space via hill climbing without getting stuck at local optima. They do that by treating the search space as a bit string, then randomly flipping bits (mutation) or exchanging segments between two parent strings (crossover, the GA version of sexual reproduction) to hop to related hills in the search space. Then the bit string is converted back to instructions and tested to see if it performs the desired algorithm.
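The mutation and reproduction moves above can be sketched in a few lines of Python (a toy with made-up rates and a fixed seed, not a full GA):

```python
import random

random.seed(1)  # fixed seed so runs are repeatable

def mutate(bits, rate=0.05):
    """Point mutation: flip each bit independently with a small probability."""
    return ''.join(('1' if b == '0' else '0') if random.random() < rate else b
                   for b in bits)

def crossover(mom, dad):
    """Single-point crossover: swap the tails of two equal-length bit strings."""
    point = random.randrange(1, len(mom))
    return mom[:point] + dad[point:], dad[:point] + mom[point:]

child = mutate('000110')
a, b = crossover('000110', '111001')
assert len(child) == 6 and len(a) == len(b) == 6
```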
The bit string usually encodes the tree form of a Lisp program to minimize syntax. We can think of it as if every token is encoded in bits (like Huffman encoding, inspired by Morse code). For example, the tokens in a (+ 1 2) expression might have the encodings 00, 01 and 10, so the bit string would be 000110, and we can quickly explore all 3^3 = 27 token sequences (versus 2^6 = 64 raw bit strings if we naively manipulate an uncompressed encoding whose token sizes vary).
Note that many of the bit strings decode to programs like (+ + 1) or (2 1 +) that don't run. So guard rails can be added to reduce the search space, for example by breaking out early when a bit string throws a compiler exception, or using SAT solvers or caching to weed out nonviable bit strings.
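A toy decoder with that kind of guard rail might look like this (the token table matches the hypothetical 00/01/10 encoding above; the validity checks are just illustrative):

```python
# '00' -> '+', '01' -> '1', '10' -> '2'; '11' is unused and undecodable.
TOKENS = {'00': '+', '01': '1', '10': '2'}

def decode(bits):
    """Turn a bit string back into tokens, or return None as the guard rail
    for strings that don't decode or don't form a runnable expression."""
    if len(bits) % 2:
        return None
    toks = [TOKENS.get(bits[i:i+2]) for i in range(0, len(bits), 2)]
    if None in toks:
        return None              # contains an undecodable '11' code
    op, *args = toks
    if op != '+' or not args or any(a == '+' for a in args):
        return None              # shapes like (+ + 1) or (2 1 +) don't run
    return op, args

assert decode('000110') == ('+', ['1', '2'])   # (+ 1 2) is viable
assert decode('000001') is None                # (+ + 1) is weeded out
assert decode('100100') is None                # (2 1 +) is weeded out
```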
We could build a superoptimizer with GAs, then transpile between MOS 6502 assembly and Lisp (or even run the MOS 6502 assembly directly in a sandbox) and not have to know anything about how the processor works. To me, this is the real beauty of GAs, because they allow us to solve problems without training, at the cost of efficiency.
I don't think that LLMs transpile to Lisp when they're designing algorithms. So it's interesting that they can achieve high complexity and high efficiency via training, without even having verification built in. Although LLMs with trillions of parameters, running on teraflop GPUs with gigabytes of memory, may or may not be viewed as "efficient".
I suspect that someday GAs may be incorporated into backpropagation to drastically reduce learning time by finding close approximations to the matrix math of gradient descent. GAs were just starting to be used to pseudorandomly produce the initial weights of neural nets around 2000 when I first learned about them.
Also quantum computing (QC) could perform certain matrix math in a fraction of the time, or even preemptively filter out bit strings which aren't runnable. I suspect that AI will get an efficiency boost around 2030 when QC goes mainstream, which will probably lead us to a final candidate learning algorithm that explains how quantum uncertainty and emergent behavior allow a physical mind to tune into consciousness and feel self-aware, but I digress.
Because modern compilers don't do any of this, and we aren't accustomed to multicore computing, from a sheer transistor-count perspective we're only getting a tiny fraction of the computing power that we might otherwise have if we designed chips from scratch using modern techniques. This is why I often say that computers today run thousands of times slower than they should for their transistor budgets.