AI is not just another workload; it breaks the assumptions underlying classical compute. Shuttling data back and forth between memory and ALUs burns power faster than edge devices can supply it. This podcast unpacks why low-power AI demands an entirely new silicon paradigm, and how re-engineering compute around data locality, in-memory operations, and a hybrid analog architecture unlocks intelligence everywhere.
