I also wasn't familiar with this terminology:
> You hand it a function; it tries to match it, and you move on.
In decompilation, "matching" means you found a function block in the machine code, wrote some C, and then confirmed that your C compiles to the exact same machine code as the original function (see the sketch below).
The author's previous post explains this all in a bunch more detail: https://blog.chrislewis.au/using-coding-agents-to-decompile-...
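For the curious, here's a minimal sketch in Python of what a "match" check can look like. The file names are hypothetical, and real matching projects pin the exact original compiler version and flags, which this glosses over:

    # Compile the reimplemented C, carve out the code bytes,
    # and compare them to bytes extracted from the original binary.
    import subprocess

    subprocess.run(["gcc", "-c", "-O2", "func.c", "-o", "func.o"], check=True)
    subprocess.run(["objcopy", "-O", "binary", "--only-section=.text",
                    "func.o", "func.bin"], check=True)

    new = open("func.bin", "rb").read()
    orig = open("original_func.bin", "rb").read()  # carved from the target beforehand
    print("MATCH" if new == orig else "mismatch")

If the bytes differ, you tweak the C (or the flags) and try again; that compile-and-compare loop is what the agent gets run through.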
What LLMs are (still?) not good at is one-shot reverse engineering for understanding by a non-expert. If that's your goal, don't blindly trust an LLM. People already know that blindly accepting LLM-written prose or code is a bad idea, but it's worth remembering that decompilation is even harder :)
Not what I would have expected 'one-shot' to mean. Maybe 'self-supervised' would be a more suitable term?
I stayed away from decompilation and reverse engineering for legal reasons.
Claude is amazing. It can sometimes get stuck in a reasoning loop, but it will break away, reassess, and continue on until it finds its way.
Claude got murdered in a dark dungeon instance: it managed to defeat the dragon but ran out of lamp oil and torches to find its way out. Because of the light system it kept getting “You can’t seem to see anything in the darkness” and randomly walked into a skeleton lair.
Super fun to watch as an observer. Super terrifying that this will replace us at the office.
I hope that others find this similarly useful.
It's good at cleaning up decompiled code, at figuring out what functions do, at uncovering weird assembly tricks and more.
The hardest form of code obfuscation is homomorphic computing: code transformed to act on encrypted data isomorphically to how regular code acts on regular data. The transformation itself hard-obfuscates the code.
Now create a homomorphic virtual machine that operates on encrypted code over encrypted data. Very hard to understand.
Now add data encryption/decryption algorithms, themselves homomorphically encrypted and run by the virtual machine, to prepare and recover the inputs, outputs, and effects of the homomorphic application code. With all data in the system encrypted by means that are hard-obfuscated, running on code that is hard-obfuscated, the entire system becomes hard^2 (not a formal measure) opaque.
This isn't realistic in practice. Homomorphic implementations of even simple functions are extremely inefficient for the time being. But it is possible, and improvements in efficiency have not been exhausted.
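To make "acting on encrypted data" concrete, here's a toy additively homomorphic scheme (Paillier-style) in Python. The tiny hardcoded primes make it completely insecure, and it supports only addition, unlike the fully homomorphic schemes the obfuscation idea above would need; it just illustrates that ciphertexts can be combined without ever decrypting:

    # Toy Paillier-style additively homomorphic encryption.
    # NOT secure: toy primes, for illustration only.
    import math, random

    p, q = 293, 433                    # real keys use ~1024-bit primes
    n, n2 = p * q, (p * q) ** 2
    phi = (p - 1) * (q - 1)
    mu = pow(phi, -1, n)               # modular inverse (Python 3.8+)

    def encrypt(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:     # r must be invertible mod n
            r = random.randrange(2, n)
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, phi, n2) - 1) // n) * mu % n

    a, b = encrypt(20), encrypt(22)
    # Multiplying ciphertexts adds the plaintexts: the arithmetic
    # happens without ever seeing 20 or 22 in the clear.
    assert decrypt((a * b) % n2) == 42

Fully homomorphic schemes extend this to arbitrary computation (addition and multiplication), which is exactly where the brutal inefficiency mentioned above comes from.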
Equivalent but different implementations of homomorphic code can obviously be made. However, given that the only credible explanation for the new code's design decisions is that it was written to exactly match the original, this precludes any "clean room" defense.
--
Implementing software as a neural network model wouldn't stop replication, but anything "decompiled" from it would clearly not have been developed independently of the original implementation.
Even distilling (training a new model on the "decompiled" model) would be a dead giveaway that it was derived directly from the source rather than a clean-room implementation.
--
I have wondered whether quantum computing might enable an efficient version of homomorphic computing over classical data.
Just some wild thoughts.
Like, if it ever leaks, or you were planning on releasing it, literally every step you took in your crime is uploaded to the cloud, ready to send you to prison.
It's what's stopped me from using hosted LLMs for DMCA-legal RE. All it takes is for a prosecutor/attorney to spin a narrative based on uploaded evidence and your ass is in court.
Have you tried asking them to simply open source the code?