I'm currently working on further speed improvements: it's already around 8× faster in some cases, but there's still room for more optimization.
Since this is an open-source project, community support is very important. I believe AI shouldn’t be controlled or driven by only a few companies, so contributions, feedback, and ideas are always very welcome. Feel free to open an issue or PR if you'd like to help.
The architecture page explains how ternary quantization, dynamic sparsity, and mmap layer streaming work together to push models far beyond normal RAM limits.
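For anyone curious what ternary quantization looks like in practice, here's a minimal sketch of the general idea (a TWN-style threshold scheme; the function name, threshold parameter, and scaling choice are illustrative assumptions, not this project's actual code):

```python
import numpy as np

def ternary_quantize(w, threshold_frac=0.7):
    """Illustrative sketch, not the project's implementation.

    Maps a float weight tensor to {-1, 0, +1} plus one float scale.
    Weights with magnitude below `delta` become 0 (this zeroing is
    also where much of the sparsity comes from); the scale is the
    mean magnitude of the surviving weights.
    """
    delta = threshold_frac * np.mean(np.abs(w))
    t = np.zeros(w.shape, dtype=np.int8)
    t[w > delta] = 1
    t[w < -delta] = -1
    mask = t != 0
    scale = float(np.abs(w[mask]).mean()) if mask.any() else 0.0
    return t, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
t, scale = ternary_quantize(w)
w_hat = scale * t  # dequantized approximation of w
print(t)
print(f"scale={scale:.3f}, zeros={int((t == 0).sum())}/{t.size}")
```

Each weight then needs only ~1.6 bits (or 2 bits with naive packing) instead of 16/32, which is what makes streaming whole layers from disk via mmap practical.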
Happy to answer questions about the implementation or benchmarks.
Maybe the author could use a large-parameter model to help him get this done, though.