Publications by Type: Report

2022
Matthew Adiletta, David Brooks, and Gu-Yeon Wei. 11/28/2022. Architectural Implications of Embedding Dimension During GCN on CPU and GPU. Cambridge: Harvard University.
Graph Neural Networks (GNNs) are a class of neural networks designed to extract information from the graphical structure of data. Graph Convolutional Networks (GCNs) are a widely used type of GNN for transductive graph learning problems that apply convolution to learn information from graphs. GCN is a challenging algorithm from an architecture perspective due to its inherent sparsity, low data reuse, and massive memory capacity requirements. Traditional neural algorithms exploit the high compute capacity of GPUs to achieve high performance for both inference and training. This work explores whether a GPU is the right architectural choice for GCN inference. We characterize GCN on both CPU and GPU to better understand the implications of graph size, embedding dimension, and sampling on performance.
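The GCN convolution the abstract refers to can be sketched in a few lines. This is a minimal dense NumPy illustration (a hypothetical toy example, not the authors' implementation); real workloads store the adjacency matrix in a sparse format, which is the source of the sparsity and low data reuse the abstract highlights, and the width of `weights` is the embedding dimension the paper studies.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    adj:      (N, N) dense adjacency matrix (sparse in practice)
    features: (N, F_in) node embeddings
    weights:  (F_in, F_out) learned projection; F_out is the
              embedding dimension of the next layer
    """
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU

# Toy 3-node path graph, embedding dimension 4 -> 2.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
h = rng.standard_normal((3, 4))
w = rng.standard_normal((4, 2))
out = gcn_layer(adj, h, w)
print(out.shape)  # (3, 2)
```

Note how the layer interleaves a sparse, irregular gather (`a_norm @ features`) with a dense GEMM (`@ weights`); the balance between the two shifts with embedding dimension, which is why CPU vs. GPU performance depends on it.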
2002
Pradip Bose, David Brooks, Viji Srinivasan, and Philip Emma. 2002. Power-Performance and Power Swing Characterization in Adaptive Microarchitectures. Technical Paper Archive - Research Reports. IBM Research.
In this paper, we present an analysis of some of the fundamental power-performance tradeoffs in processors that employ adaptive techniques to vary sizes, bandwidths, clock-gating modes, and clock frequencies. Initial expectations are set using simple analytical reasoning models. Later, simulation-based data is presented in the context of a simple, low-power superscalar processor prototype (called LPX) that is currently under development as a test vehicle. There are three fundamental issues that we attempt to address in this paper: (a) Does dynamic adaptation - in clocking or microarchitectural resources - help extend the power-performance efficiency range of wider-issue superscalars? (b) What factors of power and power-density reductions are within practical reach in future adaptive processors? (c) Does the presence of dynamic adaptation modes cause unacceptably large, worst-case power (or current) swings in affected sub-units?
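The kind of simple analytical reasoning the abstract mentions can be illustrated with the classic CMOS dynamic-power relation (a standard textbook model, not necessarily the one used in the paper): scaling voltage and frequency together yields roughly cubic power savings, which is the basic lever adaptive clocking exploits.

```python
def dynamic_power(activity, capacitance, voltage, frequency):
    """Classic CMOS dynamic power: P = a * C * V^2 * f.

    activity:    switching activity factor (0..1)
    capacitance: effective switched capacitance (F)
    voltage:     supply voltage (V)
    frequency:   clock frequency (Hz)
    """
    return activity * capacitance * voltage**2 * frequency

# Hypothetical operating points: scaling V from 1.2 -> 0.9 V along
# with f from 2.0 -> 1.5 GHz (a DVFS-style adaptation).
base = dynamic_power(0.5, 1.0e-9, 1.2, 2.0e9)
scaled = dynamic_power(0.5, 1.0e-9, 0.9, 1.5e9)
print(scaled / base)  # (0.9/1.2)^2 * (1.5/2.0) = ~0.42
```

Performance drops only linearly with frequency in this model, while power falls superlinearly; the abrupt transitions between such operating points are exactly what raises the worst-case power-swing question in (c).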