
Device-algorithm Co-optimization for an On-chip Trainable Capacitor-based Synaptic Device with IGZO TFT and Retention-centric Tiki-Taka Algorithm

Journal
Advanced Science
Date
2023.08.09
Abstract

Analog in-memory computing synaptic devices have been widely studied for the efficient implementation of deep learning. However, synaptic devices based on resistive memory have difficulty supporting on-chip training due to the lack of means to control the amount of resistance change and due to large device variations. To overcome these shortcomings, Si-CMOS and capacitor-based charge-storage synapses have been proposed, but sufficient retention time is hard to obtain because of Si-CMOS leakage currents, which degrades training accuracy. Here, we experimentally show that a novel 6T1C synaptic device using low-leakage IGZO TFT NMOS transistors provides not only linear and symmetric weight updates but also sufficient retention time, and supports parallel on-chip training operations on a 5×5 crossbar array. We also develop an efficient yet realistic training algorithm to compensate for the remaining device non-idealities, such as drifting references and long-term retention loss, demonstrating the importance of device-algorithm co-optimization.
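
The retention-centric variant described in the paper is not reproduced here, but the general Tiki-Taka idea it builds on, splitting the weight into a fast analog matrix A (here standing in for the capacitor-based array) and a slow matrix C that accumulates periodic transfers, can be sketched as below. The layer size, gamma, learning rates, transfer period, and partial-reset step are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of a Tiki-Taka-style two-matrix update.
# The effective weight is read as W = gamma * A + C: A absorbs frequent,
# noisy outer-product updates, while C is updated only by slow transfers
# from A, relaxing linearity and retention demands on any single device.

rng = np.random.default_rng(0)
n_out, n_in = 4, 8            # toy layer size (assumption)
gamma = 0.1                   # mixing coefficient (assumption)
lr_a, lr_transfer = 0.01, 0.1 # learning rates (assumptions)
transfer_every = 10           # transfer period in steps (assumption)

A = np.zeros((n_out, n_in))   # fast analog matrix (e.g. stored charge)
C = np.zeros((n_out, n_in))   # slow reference matrix

def weight():
    return gamma * A + C

for step in range(100):
    x = rng.standard_normal(n_in)       # forward activation (stand-in data)
    delta = rng.standard_normal(n_out)  # backpropagated error (stand-in data)

    # Rank-1 outer-product update, applied in parallel on a crossbar
    A -= lr_a * np.outer(delta, x)

    # Periodically read one column of A, transfer it into C,
    # then partially reset that column of A (illustrative choice)
    if (step + 1) % transfer_every == 0:
        col = (step // transfer_every) % n_in
        C[:, col] += lr_transfer * A[:, col]
        A[:, col] *= 0.5

print("effective weight:\n", weight())
```

In this sketch the periodic A-to-C transfer is what limits how long any individual analog element must hold its state, which is why retention behavior and the transfer schedule have to be co-optimized.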

Reference
Adv. Sci. 2023, 10, 2303018
DOI
http://dx.doi.org/10.1002/advs.202303018