Toward Cache-Friendly Hardware Accelerators

Citation:

Yakun Shao, Sam Xi, Viji Srinivasan, Gu-Yeon Wei, and David Brooks. 2015. "Toward Cache-Friendly Hardware Accelerators." In Sensors to Cloud Architectures Workshop (SCAW), held in conjunction with HPCA.

Abstract:

Increasing demand for power-efficient, high-performance computing has spurred a growing number and diversity of hardware accelerators in mobile Systems on Chip (SoCs) as well as servers and desktops. Despite their energy efficiency, fixed-function accelerators lack programmability, especially compared with general-purpose processors. Today's accelerators rely on software-managed scratchpad memory and Direct Memory Access (DMA) to provide fixed-latency memory access and data transfer, which incurs significant chip-resource and software-engineering costs. By contrast, hardware-managed caches with support for virtual memory and cache coherence are well known to ease programmability in general-purpose processors, but these features are not commonly supported in today's fixed-function accelerators. As a first step toward cache-friendly accelerator design, this paper discusses the limitations of scratchpad-based memories in today's accelerators, identifies the challenges of supporting hardware-managed caches, and explores opportunities to ease cache integration.
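To make the programmability contrast concrete, below is a minimal C sketch, not taken from the paper: the scratchpad size and the dma_copy_in/dma_copy_out helpers are hypothetical placeholders, with memcpy standing in for a real DMA engine so the example is self-contained.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical scratchpad capacity and DMA helpers, used only for illustration.
 * A real DMA engine would move data between DRAM and the accelerator's local
 * SRAM; memcpy stands in here so the sketch compiles and runs on its own. */
#define SCRATCHPAD_WORDS 1024
static uint32_t scratchpad[SCRATCHPAD_WORDS];

static void dma_copy_in(const uint32_t *src, size_t n)
{
    memcpy(scratchpad, src, n * sizeof(uint32_t));
}

static void dma_copy_out(uint32_t *dst, size_t n)
{
    memcpy(dst, scratchpad, n * sizeof(uint32_t));
}

/* Scratchpad + DMA style: software explicitly tiles the data to fit the fixed
 * scratchpad capacity and stages each tile in and out with DMA transfers. */
void scale_scratchpad(uint32_t *data, size_t n, uint32_t factor)
{
    for (size_t base = 0; base < n; base += SCRATCHPAD_WORDS) {
        size_t chunk = (n - base < SCRATCHPAD_WORDS) ? (n - base) : SCRATCHPAD_WORDS;
        dma_copy_in(data + base, chunk);
        for (size_t i = 0; i < chunk; i++)
            scratchpad[i] *= factor;
        dma_copy_out(data + base, chunk);
    }
}

/* Cache-based style: the accelerator dereferences the same virtual pointers the
 * host code uses, and the hardware-managed cache moves data on demand. */
void scale_cached(uint32_t *data, size_t n, uint32_t factor)
{
    for (size_t i = 0; i < n; i++)
        data[i] *= factor;
}

In the scratchpad version, tiling and data-movement decisions are baked into the software, which is the chip-resource and engineering cost the abstract refers to; in the cached version, data movement is left to hardware.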