Samsung Electronics today said that it has developed an artificial intelligence (AI) processor-embedded high bandwidth memory (HBM) chip that boasts low energy consumption and enhanced performance.
The new processing-in-memory (PIM) technology will help bring powerful AI computing capabilities inside high-performance memory.
The chip, christened HBM-PIM, doubles the performance of AI systems while reducing power consumption by over 70% compared to conventional HBM2, Samsung said in a statement.
The technology is expected to accelerate large-scale processing in data centers, high-performance computing (HPC) systems and AI-enabled mobile applications, Samsung added.
HBM-PIM is said to use the same HBM interface as earlier generations, so customers will not have to change any hardware or software to integrate the chip into their existing systems.
New chip maximizes parallel processing
Giving background on standard computer architecture, Samsung's statement explained that the processor and memory are separate, with data exchanged between the two. In such a configuration, latency occurs, especially when large volumes of data are moved.
To sidestep this bottleneck, Samsung installs AI engines inside each memory bank, maximizing parallel processing to boost performance.
“The HBM-PIM brings processing power directly to where the data is stored by placing a DRAM-optimized AI engine inside each memory bank — a storage sub-unit — enabling parallel processing and minimizing data movement.”
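The idea described above can be sketched in a few lines of code. The following is a conceptual illustration only, not Samsung's actual design: the number of banks and the "engine" operation (a simple partial dot product) are assumptions chosen to show how per-bank parallel reduction means only small partial results, rather than the full data set, need to cross the memory bus.

```python
# Conceptual sketch of PIM-style per-bank processing.
# NUM_BANKS and the bank_engine operation are illustrative assumptions,
# not details of Samsung's HBM-PIM implementation.
from concurrent.futures import ThreadPoolExecutor

NUM_BANKS = 4  # hypothetical number of memory banks

def bank_engine(bank_data, weights):
    """Stand-in for a DRAM-side AI engine: reduces its local chunk,
    so only one small partial result leaves the 'bank'."""
    return sum(d * w for d, w in zip(bank_data, weights))

def pim_style_dot(data, weights):
    """Split a vector across banks and reduce each chunk in parallel;
    only NUM_BANKS partial sums are 'moved' to be combined."""
    chunk = len(data) // NUM_BANKS
    pieces = [(data[i * chunk:(i + 1) * chunk],
               weights[i * chunk:(i + 1) * chunk])
              for i in range(NUM_BANKS)]
    with ThreadPoolExecutor(max_workers=NUM_BANKS) as pool:
        partials = pool.map(lambda p: bank_engine(*p), pieces)
    return sum(partials)

print(pim_style_dot(list(range(8)), [1.0] * 8))  # prints 28.0
```

In a conventional setup, all eight elements would travel to a central processor before any arithmetic happens; here each "bank" reduces its own chunk and only four partial sums move, which is the data-movement saving the quoted statement describes.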
The chip is currently being tested inside customers' AI accelerators, with validation expected to be completed within the first half of the year.
An AI accelerator is computer hardware that is designed specifically to handle AI requirements.
Samsung’s paper on the chip will be presented at the virtual International Solid-State Circuits Conference to be held on February 22.