
Changing the World through Creative Research

Computing Platform

We are focusing on new system architectures, low-precision neural network algorithms, neural processor compilers, heterogeneous multicore software, and distributed learning frameworks to build power-efficient, scalable architectures. We are also researching security technologies from a whole-system perspective, from hardware to software.


A new computing processor that mimics the human brain

Neural Processor

AI's rapid evolution is generating an explosion of new chip architectures, especially hardware accelerators for machine learning and deep learning. GPUs have been the workhorse for training deep learning models on servers, but more and more users now demand access to AI applications at the edge and on-device. With Moore's Law slowing, dedicated accelerators that reduce rack area and power consumption are driving hardware innovation.

AI-dedicated hardware, like neural processors, can be 30 to 80 times more energy efficient than GPUs. Neural processors are replacing traditional chips at every turn: cloud-to-edge, hyperconverged servers, and cloud-storage instances. Cloud providers are developing custom chips, and mobile AP companies are enabling high-quality on-device AI applications by providing neural processors for inference.

SAIT is developing a neural processor targeting the world's best energy and area efficiency. We plan to commercialize this new chip through close cooperation with Samsung business units. To this end, our research in new system architectures, low-precision neural network algorithms, neural processor compilers, heterogeneous multicore software, and distributed learning frameworks will help bring about a new era of AI chips.
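
To make the low-precision direction above concrete, here is a minimal sketch of symmetric per-tensor int8 weight quantization in NumPy. The scheme and the function names are illustrative assumptions, not SAIT's actual algorithm:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)   # toy weight matrix
q, scale = quantize_int8(w)
print("max abs quantization error:", np.abs(w - dequantize(q, scale)).max())
```

The appeal is that int8 multiply-accumulates are far cheaper in silicon area and energy than float32 ones, at the cost of a bounded quantization error.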

Neuromorphic Processor

With AI, machine learning, and blockchain demanding ever more computation, we are pursuing fundamental breakthroughs. One such advance is the development of neuromorphic processors inspired by the human brain. The goal of this approach is to emulate the brain's dynamic learning capability and power efficiency.

At SAIT, we are actively researching near- and in-memory computing, asynchronous spiking neural networks, and other concepts to create a brain-like processor. Our interests include brain-inspired learning and inference algorithms, low-power mixed-signal computing architectures, and new synaptic memories.
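
As a toy illustration of the spiking-neural-network idea, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the basic building block of many spiking models. All parameter values are illustrative assumptions:

```python
import numpy as np

def lif_simulate(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron; return spike step indices."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += (dt / tau) * (-v + i_t)  # leaky integration toward the input
        if v >= v_thresh:             # threshold crossing emits a spike...
            spikes.append(t)
            v = v_reset               # ...and the membrane potential resets
    return spikes

# A constant drive above threshold produces regular, repeated spiking.
print(lif_simulate(np.full(100, 1.5)))
```

Because such neurons only communicate via sparse, asynchronous spikes, a hardware implementation can stay idle most of the time, which is the source of the power-efficiency argument.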

SAIT collaborates with the Business Units to develop and verify technologies using the latest logic and memory fabrication.

Processing in Memory

Deep learning models require huge memory bandwidth to store input data, weights, and activations. During training, activations from the forward computation must be stored in order to calculate error gradients during backpropagation. As such, data movement is very expensive in terms of bandwidth, energy, and latency.

Since around 2010, Moore's law has begun to break down due to the difficulty of increasing the number of transistors on silicon. Traditional von Neumann architectures consist of processor chips specialized for serial processing and DRAMs optimized for high-density memory. The interface between the two devices is a major bottleneck, resulting in high power consumption as well as latency and bandwidth constraints. Processing in memory is being actively researched as one way to solve these issues.

A processing-in-memory architecture integrates memory and processing units, relieving the performance constraint by moving computation closer to the data (a toy cost model follows this list):
- Near-memory computing can dramatically reduce the data transmission cost between memory and the processing unit.
- In-memory computing performs both data storage and computation in the analog domain, which can further reduce the memory read cost compared to near-memory computing.
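
The back-of-the-envelope model below shows why moving compute toward memory pays off for a matrix-vector multiply. The per-operation energy numbers are illustrative assumptions, not measured values:

```python
# Toy energy model for one n x n matrix-vector multiply.
# All per-operation energies below are illustrative assumptions, not measurements.
E_MAC = 1.0         # pJ per multiply-accumulate
E_OFF_CHIP = 100.0  # pJ per weight word fetched over the DRAM bus (von Neumann)
E_NEAR_MEM = 10.0   # pJ per weight word when compute sits beside the memory

def energy_pj(n, word_energy):
    compute = n * n * E_MAC         # one MAC per weight
    movement = n * n * word_energy  # every weight word is fetched once
    return compute + movement

n = 4096
print("von Neumann :", energy_pj(n, E_OFF_CHIP) / 1e6, "uJ")
print("near-memory :", energy_pj(n, E_NEAR_MEM) / 1e6, "uJ")
```

Under these assumptions the energy budget is dominated by data movement, not arithmetic, which is exactly the imbalance processing in memory attacks.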

SAIT is actively collaborating with Samsung business units to explore future memory architectures.

High Performance Computing

A supercomputer performs at or near the highest operational rate achievable by computers. Supercomputers have long been used to advance scientific and engineering breakthroughs; by handling very large databases and performing vast numbers of computations, they continue to push the limits of operational speed.

At any given time, a few well-publicized supercomputers operate at extremely high speeds relative to all others. The race to build the fastest, most powerful supercomputer never ends, and the competition shows no sign of slowing. With scientists and engineers using supercomputers for important tasks such as studying diseases and simulating new materials, we hope these advances will benefit all of humanity.

Samsung has its own supercomputing center, which plays an important role in computational science and is applied to a wide range of tasks. The Samsung supercomputer has been used to compute novel structures and properties of chemical compounds, polymers, and crystals. It has also served as a key element in accelerating Samsung's artificial intelligence research in machine learning, deep learning, and data analytics.

Main research topics include new machine learning software, parallel computing, simulation workflow automation and optimized deployment, novel materials discovery and simulation, and management of real-time computing grids.
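
As a minimal example of the parallel-computing pattern behind many of these topics, the sketch below fans independent simulation jobs out across CPU cores with Python's standard multiprocessing pool. The `simulate` function is a hypothetical stand-in for a real simulation kernel:

```python
import math
from multiprocessing import Pool

def simulate(params):
    """Hypothetical stand-in for an expensive simulation kernel."""
    x, steps = params
    for _ in range(steps):
        x = math.cos(x)  # fixed-point iteration as a toy workload
    return x

if __name__ == "__main__":
    jobs = [(0.1 * k, 100_000) for k in range(8)]  # independent parameter sweep
    with Pool() as pool:          # defaults to one worker per CPU core
        results = pool.map(simulate, jobs)
    print(results)
```

Embarrassingly parallel sweeps like this are the simplest case; real HPC workflows add inter-node communication (e.g., MPI) and scheduling on top of the same idea.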

Security & Cryptography

We are entering a new era of cybersecurity. Although tools like machine learning and artificial intelligence (AI) can automate defense against cyberattacks, these algorithms can be brittle, depending on the data they are trained on, and it is certain that attacks will become smarter. Add the layer of connectivity supporting millions, if not billions, of devices behind self-driving cars, medical devices, the IoT, and more, and you have a vast battlefield to defend. With such a large attack surface, security must be integrated and developed at the design stage whenever a new technology is introduced.

As we have witnessed throughout history, and again with the advent of computers, when a new technology emerges, attackers' weapons improve and the security paradigm shifts. For example, RSA and ECC public-key cryptosystems will no longer be secure once quantum computers arrive. Blockchain, cloud, and edge computing are likewise moving the world from closed embedded environments to open environments that are easier to attack but harder to protect.

SAIT is researching security technologies from the whole system perspective, from hardware (HW) to software (SW):
- Next-generation cryptography
- Efficient and intelligent vulnerability discovery across different products
- Attack and defense mechanisms of HW security systems
- Automated vulnerability detection in source code and binary code
- Fuzzing to mine for zero-day exploits (a minimal sketch follows this list)
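
The sketch below shows the core loop of mutation-based fuzzing against a deliberately buggy toy parser. Production fuzzers such as AFL or libFuzzer add coverage feedback and corpus management; everything here, including the planted bug, is a hypothetical illustration:

```python
import random

def toy_parser(data: bytes) -> None:
    """Deliberately buggy parser used as the fuzz target (hypothetical)."""
    if data and data[0] == 0xFF and len(data) % 2 == 0:
        raise RuntimeError("unhandled record type")  # the planted bug

def mutate(seed: bytes) -> bytes:
    """Flip a few random bytes of the seed input."""
    buf = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = b"\x00record\x00"  # benign 8-byte seed input
for i in range(200_000):
    sample = mutate(seed)
    try:
        toy_parser(sample)
    except RuntimeError:
        print(f"crash found after {i} iterations: {sample!r}")
        break
```

Each crash-inducing input is a candidate vulnerability; the art of real fuzzing lies in guiding mutation so such inputs are found in far fewer iterations.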

Software Engineering

Software engineering (SE) is "the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software," as defined in the ISO/IEC/IEEE standard. Since the IEEE Computer Society first published its Transactions on Software Engineering in 1975, both the technologies and the practices in this field have continued to evolve. A holistic view of SE can be found in the Software Engineering Body of Knowledge (SWEBOK). Recently, SE has evolved alongside emerging technologies such as artificial intelligence, smart machines, and cyber-physical systems. For instance, verification and validation are critical to ensuring the production-readiness of a machine learning (ML) system, yet testing an ML system is more complex and challenging than testing manually coded systems because its behavior depends on data and models.
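
One common answer to this data- and model-dependence problem is metamorphic testing: instead of asserting fixed expected outputs, we assert relations that must hold between related inputs. The sketch below checks translation invariance of a minimal k-nearest-neighbour classifier; the classifier and the relation are illustrative choices, not a specific SAIT tool:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbour classifier using Euclidean distance."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(votes).argmax() for votes in y_train[nearest]])

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] > 0).astype(int)      # synthetic labels: sign of the first feature
X_test = rng.normal(size=(10, 2))

# Metamorphic relation: translating all points by the same offset must not
# change predictions, because pairwise Euclidean distances are unchanged.
offset = np.array([5.0, -3.0])
base = knn_predict(X, y, X_test)
shifted = knn_predict(X + offset, y, X_test + offset)
print("relation holds:", np.array_equal(base, shifted))
```

A violated relation signals a defect without requiring a labeled oracle, which is what makes this style of test attractive for ML systems.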

Among the vast and diverse research areas in SE, we are acutely interested in applying a quantifiable approach to software development. Currently, SAIT is implementing a corporate-wide software measurement system to continuously improve software product quality. Furthermore, we are developing an automated code review system, merged into the continuous integration system, that effectively and efficiently detects quality violations. These activities lead us to research quantifying technical debt in both code quality and the development process. At the same time, we are researching software analytics, which provides insights for decision-making by linking various types of software artifacts. Moreover, we want to ensure the reliability of the entire software stack around ML systems.
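
As one small, hedged example of the kind of quality-violation check an automated review system might run in CI, the sketch below uses Python's ast module to flag functions that exceed a length threshold. The rule and the threshold are illustrative assumptions only:

```python
import ast
import textwrap

def long_functions(source: str, max_lines: int = 30):
    """Flag functions longer than max_lines: one toy quality-violation rule."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                findings.append((node.name, length))
    return findings

code = textwrap.dedent("""
    def example():
        x = 1
        return x
""")
print(long_functions(code, max_lines=2))  # [('example', 3)]
```

Rules like this are cheap proxies; combined over time, their counts feed the kind of technical-debt quantification described above.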