I think neuromorphic hardware is putting the cart before the horse. We should start with neuroevolution experiments that seek to discover effective recurrent spiking topologies.
There are non-linear networks so efficient that we wouldn't need specialized hardware to run them. The trade-off is that they're incredibly hard to find. I think we might have enough compute on hand now.
Assuming a generalist online learning model exists, we'd only have to find it once. This isn't like backpropagation: activation = learning when techniques like spike-timing-dependent plasticity (STDP) are used.
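To make the "activation = learning" point concrete, here's a minimal sketch of the classic pair-based STDP rule, where each pre/post spike pair nudges the synaptic weight with no separate backward pass. The time constants and amplitudes are assumed illustrative values, not anything from the comment above:

```python
import math

# Illustrative constants (assumed values, typical of textbook STDP models)
TAU_PLUS = 20.0   # ms, potentiation time constant
TAU_MINUS = 20.0  # ms, depression time constant
A_PLUS = 0.01     # potentiation amplitude
A_MINUS = 0.012   # depression amplitude (slightly larger, for stability)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change from a single pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        # Pre fires before post: causal pairing, strengthen the synapse
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:
        # Post fires before pre: anti-causal pairing, weaken the synapse
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(10.0, 15.0) > 0)  # causal pair potentiates
print(stdp_dw(15.0, 10.0) < 0)  # anti-causal pair depresses
```

The learning signal is entirely local to the synapse (just the two spike times), which is why spiking activity and learning can be the same process rather than separate forward and backward phases.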
I wouldn't be surprised if it's closer to 10 watts vs. 100 megawatts; the brain is about 10 million times more efficient.
This is yet another call to action, this time with "AI consumes too much energy" sauce. I've seen these for more than two decades, and nothing ever came out of them.
A special mention for this paragraph:
> The programmability challenge is perhaps the most significant. The von Neumann architecture comes with 80 years of software development, debugging tools, programming languages, libraries, and frameworks. Every computer science student learns to program von Neumann machines. Neuromorphic chips and in-memory computing architectures lack this mature ecosystem.
This is total B.S., especially as applied to AI. There's no need for an "ecosystem" of millions of software libraries; there's a handful of algorithms you need to run, and that's it, the thing can earn money. And of course plenty of people work with FPGAs or custom logic, which have nothing to do with von Neumann machines, and they get things done. If you have a new technology and you can't build even a few sample apps on it, don't blame the establishment; it just means your technology doesn't work.