Cornell University: Building Sparse Linear Algebra Accelerators with HLS

By Mathilde Karsenti

Sparse linear algebra (SLA) operations are essential in many applications such as data analytics, graph processing, machine learning, and scientific computing. However, it is challenging to build efficient hardware accelerators for SLA operations since they typically exhibit low operational intensity and irregular compute and data access patterns. In particular, some of these challenges are not well studied in the context of High-Level Synthesis (HLS). 
In this webinar, Yixiao Du of Cornell University will first introduce HiSparse, an accelerator for sparse-matrix dense-vector multiplication (SpMV). To achieve high bandwidth utilization, the team co-designs the sparse storage format and the accelerator architecture. They further demonstrate the use of Catapult HLS to build a high-throughput pipeline that can handle irregular data dependencies and access patterns. Building on the SpMV accelerator, they then develop a versatile sparse accelerator that supports multiple SLA operations, with run-time configurability to handle different compute patterns. Their architecture design is guided by a novel analytical model that enables rapid exploration of the design configuration search space. In evaluations using both an HLS implementation and simulation, their sparse accelerators deliver promising speedups with higher bandwidth and energy efficiency compared to CPU and GPU executions.

Register Now

Date: Tuesday, November 08, 2022

Time: 10:00 AM Pacific Standard Time


This article first appeared on the Siemens Digital Industries Software blog.