Department of Electrical Engineering & Computer Science



AREAS OF INTEREST
 
Ultra-Low Power In-Memory Computing
Over the past decades, the amount of data that computing systems must process and analyze has grown dramatically toward exascale, posing grand challenges for state-of-the-art computing systems to deliver solutions that are simultaneously energy-efficient and high-performance. These challenges stem mainly from the well-known power wall (huge leakage power consumption limits performance growth as technology scales down) and the memory wall (long memory access latency, limited memory bandwidth, and energy-hungry data transfer). There is therefore a great need to leverage innovations in both circuit design and computing architecture to build energy-efficient, high-performance non-von Neumann computing platforms. In-memory computing has been proposed as a promising solution that reduces the massive, power-hungry data traffic between computing and memory units, leading to significant improvements in overall system performance and energy efficiency. Our research focuses on such circuit- and architecture-level innovations for energy-efficient in-memory computing.
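
As a concrete illustration of the idea, the sketch below simulates an analog resistive crossbar, one common in-memory computing substrate: weights stay in the array as device conductances, and a vector-matrix multiply happens in a single read step via Kirchhoff's current law. This is a minimal sketch; the array size, conductance range, and read voltages are illustrative assumptions, not parameters of any specific technology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crossbar: weights stored in place as device conductances
# (siemens). Sizes and value ranges are illustrative only.
G = rng.uniform(1e-6, 1e-4, size=(4, 8))   # 4 wordlines x 8 bitlines
v_in = rng.uniform(0.0, 0.2, size=4)       # read voltages on the wordlines

# Kirchhoff's current law: each bitline current is the sum of V*G products,
# so the array computes a vector-matrix multiply in one read step.
i_out = v_in @ G                           # bitline currents (amperes)

# A conventional system would fetch every weight from memory and multiply
# in the ALU -- exactly the data movement in-memory computing avoids.
expected = np.array([sum(v_in[r] * G[r, c] for r in range(4))
                     for c in range(8)])
assert np.allclose(i_out, expected)
print(i_out)
```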



Deep Learning Neural Networks
Deep Neural Networks (DNNs) are the state-of-the-art neural network computing model, achieving close-to-human or better-than-human performance in many large-scale cognitive applications such as computer vision, speech recognition, natural language processing, and object recognition. The most successful DNNs are deep convolutional neural networks, which consist of multiple types of layers, including convolution, activation, pooling, and fully-connected layers. A practical DNN may have tens to thousands of layers to reach the required inference accuracy, making it heavily memory-intensive (tens of GB of working memory) and compute-intensive (demanding powerful CPUs, GPUs, FPGAs, ASICs, etc.). Our research focuses on making such networks efficient on both fronts.
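
The layer types named above can be made concrete with a minimal NumPy forward pass: one convolution, a ReLU activation, max pooling, and a fully-connected classifier head. This is only a sketch; the weights and shapes are random placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in most DNN frameworks)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling."""
    h, w = x.shape[0] // s * s, x.shape[1] // s * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

# Toy 8x8 "image", one 3x3 filter, and a 10-class fully-connected head.
img = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))
fmap = max_pool(relu(conv2d(img, kernel)))   # conv -> activation -> pooling
W = rng.standard_normal((10, fmap.size))     # fully-connected layer
logits = W @ fmap.ravel()
print("predicted class:", int(np.argmax(logits)))
```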



Brain Inspired (Neuromorphic) Computing
Human brains are vastly more energy-efficient at interpreting the world visually or understanding speech than any CMOS-based computer system of the same size. Neuromorphic computing can perform human-like cognitive tasks such as vision, classification, and inference. The fundamental computing units of an artificial neural network are neurons, which connect to each other and to external stimuli through programmable connections called synapses. The basic operation of an artificial neuron is to sum its N weighted inputs and pass the result through a transfer (activation) function. Such neuron and synapse functions can be implemented efficiently using a variety of emerging post-CMOS device technologies, and our research in this area focuses on these implementations.
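
The neuron described above, a weighted sum of N inputs passed through a transfer function, reduces to a few lines of NumPy. The stimuli, synaptic weights, and the choice of tanh as the transfer function below are illustrative assumptions.

```python
import numpy as np

def neuron(inputs, weights, bias=0.0, transfer=np.tanh):
    """Basic artificial neuron: weighted sum of N inputs, then a transfer function."""
    return transfer(np.dot(weights, inputs) + bias)

# Illustrative stimuli and synaptic weights. In a hardware realization the
# weights would be programmable device states (e.g. memristor conductances)
# and the weighted sum would be formed in the analog domain.
x = np.array([0.5, -1.0, 0.25])   # external stimuli
w = np.array([0.8, 0.1, -0.4])    # synapses (programmable connections)
print(neuron(x, w, bias=0.05))
```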



Security of Artificial Intelligence
Deep Neural Networks (DNNs) have achieved great success in a variety of tasks, including but not limited to image classification, speech recognition, machine translation, and autonomous driving. Despite this remarkable progress, recent studies have shown that DNNs are vulnerable to adversarial examples. In image classification, an adversarial example is a carefully crafted image whose difference from the original image is visually imperceptible, yet it causes a DNN model to misclassify. Beyond image classification, attacks on other DNN-based tasks have also been actively investigated, such as visual question answering, image captioning, semantic segmentation, machine translation, speech recognition, and medical prediction. Our research in this area addresses such attacks and the security of DNNs against them.
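
One classic way such adversarial examples are crafted is the Fast Gradient Sign Method (FGSM) of Goodfellow et al.: perturb the input by a small step in the sign of the loss gradient with respect to that input. The sketch below applies it to a toy logistic-regression "classifier" standing in for a DNN, since its input gradient has a closed form; the weights and the epsilon budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier standing in for a DNN; weights are illustrative.
w = rng.standard_normal(16)
b = 0.0
x = rng.standard_normal(16)   # a "clean" input with true label y = 1
y = 1.0

# Gradient of the cross-entropy loss w.r.t. the INPUT. For this model it
# is closed-form (a DNN would use backprop): dL/dx = (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: a small, per-element step in the sign of the input gradient,
# pushing the score toward the wrong class.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```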



Low Power and High Performance Post-CMOS Logic Design
Non-volatile spin-torque switches can be used to build a variety of logic families, such as reconfigurable Boolean logic, polymorphic logic, stochastic logic, and approximate logic, offering enhanced scalability and energy efficiency thanks to the reduced leakage of spin-based devices.
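
As one illustration of the polymorphic idea: spin-torque devices are often abstracted as majority gates, and a 3-input majority gate reconfigures between AND and OR simply by re-biasing its control input. The sketch below assumes that abstraction; it is a behavioral model, not a device-level one.

```python
def majority(a, b, c):
    """3-input majority vote, a natural logic primitive for spin-torque devices."""
    return int(a + b + c >= 2)

def polymorphic_gate(a, b, mode):
    """One physical gate, two functions: mode=0 -> AND, mode=1 -> OR.
    "Reconfiguration" is just re-biasing the third (control) input."""
    return majority(a, b, mode)

# Exhaustive check over all input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert polymorphic_gate(a, b, 0) == (a & b)   # behaves as AND
        assert polymorphic_gate(a, b, 1) == (a | b)   # behaves as OR
print("majority gate reconfigures between AND and OR via its control input")
```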

