
Disturbance-aware on-chip training with mitigation strategies for massively parallel computing in analog deep learning accelerator

Journal
Advanced Science
Date
2025.05.20
Abstract

On-chip training in analog in-memory computing (AIMC) holds great promise for reducing data latency and enabling user-specific learning. However, analog synaptic devices face significant challenges, particularly during parallel weight updates in crossbar arrays, where non-uniform programming and disturbances often arise. Despite their importance, these training-time disturbances are difficult to quantify because their underlying mechanism has remained unclear, and their impact on training performance is therefore underexplored. This work precisely identifies and quantifies the disturbance effects in synaptic devices based on oxide semiconductors and capacitors, whose endurance and variation have already been validated but which suffer worsening disturbance as the devices are scaled down. By clarifying the disturbance mechanism, three simple operational schemes are proposed to mitigate these effects, and their efficacy is validated through device-array measurements. Furthermore, to evaluate learning feasibility in large-scale arrays, real-time disturbance-aware training simulations are conducted by mapping synaptic arrays onto convolutional neural networks (CNNs) for the CIFAR-10 dataset. A high accuracy of ~93% is achieved even under intensified disturbance, using a cell capacitor of 50 fF, comparable to dynamic random-access memory (DRAM) levels. Combined with the devices' inherent advantages in endurance and variation, this approach offers a practical solution for hardware-based deep learning.
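To make the idea of "disturbance-aware" training concrete, the sketch below shows one possible way such an effect could be folded into a crossbar weight-update simulation: an outer-product update is applied to the array, and cells that are addressed but not intentionally programmed lose a small fraction of their stored charge each pulse. This is a minimal, hypothetical illustration only; the function name parallel_update, the multiplicative charge-loss model, and the disturb_rate parameter are assumptions for explanation and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def parallel_update(W, x_col, delta_row, lr=0.1, disturb_rate=1e-3):
        """Outer-product (parallel) update of a crossbar weight array W.

        x_col        : input activations driving the columns
        delta_row    : error signals driving the rows
        disturb_rate : assumed fractional charge loss per pulse for cells
                       that are half-selected but not programmed
        """
        target = lr * np.outer(delta_row, x_col)   # intended weight change
        programmed = np.abs(target) > 1e-6         # cells actually pulsed
        W = W + target
        # half-selected cells drift slightly toward zero each update
        W = np.where(~programmed, W * (1.0 - disturb_rate), W)
        return W

    W = rng.normal(0.0, 0.1, size=(8, 8))
    x = rng.normal(size=8)
    d = rng.normal(size=8)
    W = parallel_update(W, x, d)

In a full training simulation, a routine of this kind would replace the ideal weight update of each mapped CNN layer, so that accuracy can be evaluated under the disturbance model rather than with perfect programming.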

Reference
Adv. Sci. 2025, 2417635
DOI
http://dx.doi.org/10.1002/advs.202417635