Before Thompson’s experiment, many researchers tried to evolve circuit behaviours in simulation. The problem was that simulated components are idealised: they ignore noise, parasitics, temperature drift, leakage paths, cross-talk, and so on. Evolved circuits therefore tended to fail in the real world because the simulation behaved too cleanly.
Thompson instead let evolution operate directly on a real FPGA, so it could take advantage of real-world physics. This was called “intrinsic evolution”: evolution in the real substrate rather than in a model of it.
The task was to evolve a circuit that could distinguish between a 1 kHz and a 10 kHz square-wave input, outputting high for one and low for the other.
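For comparison, a human engineer would normally solve this by counting input edges against a reference clock over a fixed window. The rough Python sketch below shows that conventional approach; the sample rate, window length, threshold, and function names are my own illustrative choices, not anything from Thompson's setup.

```python
# Behavioural sketch of the conventional approach (my own illustration):
# count rising edges of the input over a fixed window. A 10 ms window sees
# roughly 10 rising edges at 1 kHz and roughly 100 at 10 kHz, so a threshold
# of 50 cleanly separates the two.

def classify_frequency(samples, sample_rate_hz, window_s=0.01, threshold=50):
    """Return 1 for the ~10 kHz input, 0 for the ~1 kHz input.

    samples: list of 0/1 values taken at sample_rate_hz.
    """
    window_len = int(window_s * sample_rate_hz)
    window = samples[:window_len]
    rising_edges = sum(
        1 for prev, cur in zip(window, window[1:]) if prev == 0 and cur == 1
    )
    return 1 if rising_edges > threshold else 0


def square_wave(freq_hz, sample_rate_hz, duration_s):
    """Generate an ideal 0/1 square wave for testing the classifier."""
    n = int(duration_s * sample_rate_hz)
    return [1 if (t * freq_hz * 2 // sample_rate_hz) % 2 == 0 else 0 for t in range(n)]


if __name__ == "__main__":
    rate = 1_000_000  # 1 MHz sampling, an arbitrary choice for the sketch
    print(classify_frequency(square_wave(1_000, rate, 0.01), rate))   # -> 0
    print(classify_frequency(square_wave(10_000, rate, 0.01), rate))  # -> 1
```

The evolved circuit did nothing like this, which is exactly why the result was so striking.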
The final evolved solution:
- Used fewer than 40 logic cells
- Had no recognisable structure, no pattern resembling filters or counters
- Worked only on that exact FPGA and that exact silicon patch.
Most astonishingly:
The circuit depended critically on five logic elements that were not logically connected to the main path.
Removing them should not have affected a digital design at all, since they were not wired to the output, yet in practice the circuit stopped functioning when they were removed.
Thompson determined via experiments that evolution had exploited:
- Parasitic capacitive coupling
- Propagation delay differences
- Analogue behaviours of the silicon substrate
- Electromagnetic interference from neighbouring cells
In short: the evolved solution used the FPGA as an analogue medium, even though engineers normally treat it as a clean digital one.
Evolution had tuned the circuit to the physical quirks of the specific chip. It demonstrated that hardware evolution could produce solutions that humans would never invent.
Standard cell libraries often implement multiplexers using transmission gates (CMOS switches) with inverters to buffer the input and restore the signal drive. This implementation has the advantage of eliminating static hazards (glitches) in the output that can occur with conventional gates.
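To make the glitch concrete, here is a toy Python trace of the conventional gate-level (AND-OR) 2:1 mux; the model, its one-step inverter delay, and the function name are my own, not from any actual cell library. With both data inputs at 1, a falling select briefly turns both product terms off, producing the classic static-1 hazard.

```python
# Toy model (my own sketch): out = (a & ~sel) | (b & sel), where ~sel lags the
# select by one time step to model the inverter's propagation delay.

def and_or_mux_trace(a, b, sel_seq, inverter_delay=1):
    trace = []
    for t, sel in enumerate(sel_seq):
        # The inverted select is computed from an older select value.
        not_sel = 1 - sel_seq[max(0, t - inverter_delay)]
        trace.append((a & not_sel) | (b & sel))
    return trace

print(and_or_mux_trace(a=1, b=1, sel_seq=[1, 1, 0, 0, 0]))
# -> [1, 1, 0, 1, 1]: the momentary 0 is the static-1 hazard.
# A transmission-gate mux keeps the output at 1 in this case, since whichever
# switch conducts during the transition is passing the same logic-1 value.
```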
I would love to know more about this – how much info is publicly available on how Intel used mainframes to design the 386? Did they develop their own software, or use something off-the-shelf? And I'm somewhat surprised they used IBM mainframes, instead of something like a VAX.
Does that schedule include all the revisions they did too? The first few were almost uselessly buggy:
This implementation is sometimes called a "jam latch" (the new value is "jammed" into the inverter loop).
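A toy behavioural sketch of the idea (my own illustration, not the actual cell): the storage element is a loop of two cross-coupled inverters, and the write path simply overdrives the weak feedback inverter, forcing the new value onto the storage node.

```python
# Minimal sketch of a jam latch, assuming the write driver is strong enough
# to overpower the weak feedback inverter holding the old bit.

class JamLatch:
    def __init__(self, value=0):
        self.q = value           # storage node
        self.q_bar = 1 - value   # node driven by the forward inverter

    def write(self, new_value):
        # The pass transistor connects the strong input driver to q,
        # "jamming" the new value in despite the feedback inverter.
        self.q = new_value
        # The inverter loop then settles consistently with the new value.
        self.q_bar = 1 - self.q

    def read(self):
        return self.q

latch = JamLatch(0)
latch.write(1)
print(latch.read())  # -> 1: the jammed value is now held by the loop
```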
"Showing one's work" would need details that are verifiable and reproducible.