AREAS OF INTEREST
Ultra-Low Power In-Memory Computing
Over the past decades, the amount of data that computing systems must process and analyze has grown dramatically toward exascale, posing grand challenges for state-of-the-art computing systems to deliver energy-efficient and high-performance computing solutions simultaneously. These challenges stem mainly from the well-known power wall (i.e., large leakage power consumption limits performance growth as technology scales down) and the memory wall (long memory access latency, limited memory bandwidth, and energy-hungry data transfer). There is therefore a great need to leverage innovations in both circuit design and computing architecture to build energy-efficient, high-performance non-von Neumann computing platforms. In-memory computing has been proposed as a promising solution that reduces massive, power-hungry data traffic between computing and memory units, leading to significant improvements in overall system performance and energy efficiency. Our research focuses on:
- Explore in-memory logic circuit designs based on existing memory technologies, including SRAM, DRAM, magnetic (spintronic) memory, and resistive RAM, targeting low overhead, efficient operation, and low latency
- Explore dual-mode in-memory computing architectures that can work simultaneously as memory and as in-memory computing units, greatly reducing data communication, fully leveraging the highly parallel computing capability of processing-in-memory architectures, and thus improving system performance
- Explore applications well suited to in-memory computing that can be either fully implemented or pre-processed on the proposed platform, including deep neural networks, data encryption, image processing, graph processing, and bioinformatics
- Related publications: [J24:JETC'18], [J22:TMSCS'18], [J21:TMAG'18], [J18:TCAD'17], [C42:ASPDAC'19], [C41:ASPDAC'19], [C40:ICCD'18], [C39:ICCAD'18], [C36:ISVLSI'18], [C34:DAC'18], [C33:DAC'18], [C31:ASPDAC'18], [C30:ASPDAC'18], [C29:ICCD'17], [C28:ICCD'17], [C27:NCAMA'17], [C25:ISLPED'17], [C24:NANOARCH'17], [C23:ISVLSI'17], [C22:ISVLSI'17], [C21:ISVLSI'17], [C20:MWSCAS'17], [C17:GLSVLSI'17], [C12:NANOARCH'16]
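The data-movement saving described above can be illustrated with a purely functional sketch (a hypothetical behavioral model, not any specific published design): a memory array applies a bitwise operation across whole rows in one step, instead of streaming every word through the processor.

```python
# Behavioral sketch of bulk bitwise in-memory operations. Each row is a
# Python int acting as a bit vector; a row-wide logic operation is a single
# "command" with no per-word CPU transfer. (Illustrative model only.)

class BitwiseMemoryArray:
    def __init__(self, num_rows):
        self.rows = [0] * num_rows          # one bit vector per row

    def write_row(self, r, value):
        self.rows[r] = value

    def row_and(self, a, b, out):
        # Whole-row AND in one step; a real design might realize this by
        # activating multiple rows of the array simultaneously.
        self.rows[out] = self.rows[a] & self.rows[b]

    def row_or(self, a, b, out):
        self.rows[out] = self.rows[a] | self.rows[b]

mem = BitwiseMemoryArray(4)
mem.write_row(0, 0b11001010)
mem.write_row(1, 0b10101100)
mem.row_and(0, 1, 2)           # row 2 = row 0 AND row 1
mem.row_or(0, 1, 3)            # row 3 = row 0 OR row 1
print(f"{mem.rows[2]:08b}")    # 10001000
print(f"{mem.rows[3]:08b}")    # 11101110
```

In a conventional system, both operand rows would cross the memory bus before the result is computed; here only the row indices and the opcode do.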
Deep Learning Neural Network
The Deep Neural Network (DNN) is the state-of-the-art neural network computing model, successfully achieving close-to-human or better-than-human performance in many large-scale cognitive applications, such as computer vision, speech recognition, natural language processing, and object recognition. The most successful DNNs are deep convolutional neural networks, which consist of multiple types of layers, including convolution, activation, pooling, and fully-connected layers. In practice, a DNN may have tens to thousands of layers to achieve optimal inference accuracy, which makes it heavily memory-intensive (tens of GB of working memory) and compute-intensive (requiring powerful CPUs, GPUs, FPGAs, ASICs, etc.). Our research focuses on:
- Explore automated and general methodologies that simultaneously reduce DNN model size and computing complexity while maintaining state-of-the-art accuracy
- Explore how to efficiently implement compressed DNN models on low-power, resource-limited mobile systems, embedded systems, IoT, and edge devices for various applications, such as pattern recognition and object tracking/detection
- Related publications: [C43:WACV'19], [C40:ICCD'18], [C39:ICCAD'18], [C38:ISLPED'18], [C37:ISVLSI'18] (Best Paper Award), [C34:DAC'18], [C32:WACV'17], [Statistical Ternarization]
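One well-known member of the compression family above is weight ternarization, where each weight is mapped to {-a, 0, +a}. The sketch below is illustrative only (the threshold and scaling rules are simple placeholders, not the method of any particular paper): a 32-bit float weight shrinks to a 2-bit code plus one shared scale.

```python
# Illustrative weight ternarization: weights below a threshold become 0,
# the rest become +1/-1, with one shared magnitude 'scale' per layer.
# (Hypothetical threshold rule, for demonstration only.)

def ternarize(weights, delta_ratio=0.5):
    max_w = max(abs(w) for w in weights)
    delta = delta_ratio * max_w                       # pruning threshold
    kept = [abs(w) for w in weights if abs(w) > delta]
    scale = sum(kept) / len(kept) if kept else 0.0    # shared magnitude 'a'
    codes = [1 if w > delta else -1 if w < -delta else 0 for w in weights]
    return codes, scale

codes, scale = ternarize([0.9, -0.7, 0.1, -0.05, 0.4])
print(codes)   # [1, -1, 0, 0, 0]
print(scale)   # 0.8
```

Reconstructed weights are simply `code * scale`, which replaces floating-point multiplications in a layer with sign flips and a single shared scaling.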
Brain Inspired (Neuromorphic) Computing
Human brains are vastly more energy efficient at interpreting the world visually or understanding speech than any CMOS-based computer system of the same size. Neuromorphic computing aims to perform such human-like cognitive computing, including vision, classification, and inference. The fundamental computing units of an artificial neural network are neurons, which connect to each other and to external stimuli through programmable connections called synapses. The basic operation of an artificial neuron is to sum its N weighted inputs and pass the result through a transfer (activation) function. Such neuron and synapse functions can be efficiently implemented using different emerging post-CMOS device technologies. Our research in this area includes:
- Physical modeling of nanoscale emerging devices for potential neuron or synapse applications, such as spin-transfer torque devices, domain wall motion devices, magnetic skyrmions, and memristors
- Exploration of various neuromorphic computing models, such as deep convolutional neural networks, spiking neural networks, Hierarchical Temporal Memory, and oscillatory neural networks
- Cross-layer (device/circuit/architecture) co-design for implementing complex machine learning tasks, such as pattern/speech recognition, semantic reasoning, robotic control, and motion detection
- Related publications: [J19:TMSCS'17], [J17:MAGL'17], [J15:JETC'17], [J11:TCAD'16], [J10:TED'16], [J9:TNANO'16], [J8:TNNLS'16], [J7:TNANO'15], [J6:TMAG'15], [J5:TMAG'15], [J4:JETCAS'15], [J2:TNANO'14], [J1:JAP'13], [C35:GLSVLSI'18], [C15:DATE'17], [C10:IJCNN'16], [C9:ASPDAC'16], [C8:ASPDAC'16], [C7:DATE'14], [C6:ISVLSI'14], [C5:DAC'13], [C4:ISLPED'13], [C3:ICCAD'13], [C2:ISQED'13], [C1:E3S'13]
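The basic neuron operation described above (sum the N weighted inputs, then apply a transfer function) fits in a few lines; the weights play the role of the programmable synapses, and tanh stands in for any activation of choice.

```python
import math

# Minimal artificial neuron: weighted sum of inputs plus bias, passed
# through a transfer (activation) function.

def neuron(inputs, weights, bias=0.0, activation=math.tanh):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(weighted_sum)

out = neuron([1.0, 0.5, -1.0], [0.2, 0.4, 0.1])
# weighted sum = 0.2 + 0.2 - 0.1 = 0.3; output = tanh(0.3)
print(out)
```

A hardware neuron implements exactly this dataflow: the synapse array performs the multiply-accumulate, and the device's nonlinearity provides the activation.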
Security of Artificial Intelligence
Deep Neural Networks (DNNs) have achieved great success in a variety of tasks, including but not limited to image classification, speech recognition, machine translation, and autonomous driving. Despite this remarkable progress, recent studies have shown that DNNs are vulnerable to adversarial examples. In image classification, an adversarial example is a carefully crafted image that is visually almost indistinguishable from the original image yet causes the DNN model to misclassify it. Beyond image classification, attacks on other DNN-related tasks have also been actively investigated, such as visual question answering, image captioning, semantic segmentation, machine translation, speech recognition, and medical prediction. Our research in this area studies such adversarial attacks and the corresponding defenses.
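How a tiny perturbation flips a model's decision can be seen on a toy example, in the spirit of the well-known fast gradient sign method (FGSM) from the literature. The "classifier" below is a made-up linear scorer, not a model from our work; for a linear model the gradient of the score with respect to the input is just the weight vector.

```python
# Toy adversarial perturbation on a linear "classifier":
# positive score -> class A, negative score -> class B.

def score(x, w, b):
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def fgsm_perturb(x, w, eps):
    # Step each input coordinate by eps against sign(gradient) = sign(w),
    # which lowers the score fastest per unit of max-norm perturbation.
    sign = lambda v: 1 if v > 0 else -1 if v < 0 else 0
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

w, b = [0.5, -0.3, 0.8], -0.2
x = [0.4, 0.1, 0.3]                # clean input: score 0.21 -> class A
x_adv = fgsm_perturb(x, w, eps=0.2)
print(score(x, w, b) > 0)          # True  (class A)
print(score(x_adv, w, b) > 0)      # False (pushed across the boundary)
```

Each coordinate moved by at most 0.2, yet the predicted class changed; in image space the analogous perturbation is small enough to be invisible to a human.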
Low Power and High Performance Post-CMOS Logic Design
Non-volatile spin-torque switches can be used to build a variety of logic designs, such as reconfigurable Boolean logic, polymorphic logic, stochastic logic, and approximate logic, which offer enhanced scalability and energy efficiency resulting from the reduced leakage of spin-based devices.
- Related publications: [J23:TNANO'18], [J20:TMAG'18] (Front Cover Paper), [J17:MAGL'17], [J16:TCAD'17], [J14:TETC'17], [J13:JETCAS'17], [J12:TNANO'17], [J3:TNANO'14], [C35:GLSVLSI'18], [C26:ICCAD'17], [C21:ISVLSI'17] (Best Paper Award), [C19:ISCAS'17], [C16:GLSVLSI'17], [C14:ISQED'17], [C11:GLSVLSI'16]
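Spin-based switches are commonly modeled at the behavioral level as threshold or majority gates, which is what makes them naturally polymorphic: one control input re-programs the same physical gate between Boolean functions. The sketch below is a purely functional model (no device physics) of that idea.

```python
# Behavioral model of a polymorphic three-input majority gate:
# output is 1 when at least two of the three inputs are 1.
# ctrl = 0 -> the gate computes AND(a, b); ctrl = 1 -> OR(a, b).

def threshold_gate(a, b, ctrl):
    return int(a + b + ctrl >= 2)

for a in (0, 1):
    for b in (0, 1):
        assert threshold_gate(a, b, 0) == (a & b)   # AND mode
        assert threshold_gate(a, b, 1) == (a | b)   # OR mode
print("one majority gate serves as AND (ctrl=0) or OR (ctrl=1)")
```

Because the function is selected by a data input rather than by rewiring, the same non-volatile gate can be repurposed at run time, which is the essence of the reconfigurable and polymorphic designs listed above.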