Work efficient higher-order vectorisation

Ben Lippmeier*, Manuel M.T. Chakravarty, Gabriele Keller, Roman Leshchinskiy, Simon Peyton Jones

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Existing approaches to higher-order vectorisation, also known as flattening nested data parallelism, do not preserve the asymptotic work complexity of the source program. Straightforward examples, such as sparse matrix-vector multiplication, can suffer a severe blow-up in both time and space, which limits the practicality of this method. We discuss why this problem arises, identify the mis-handling of index space transforms as the root cause, and present a solution using a refined representation of nested arrays. We have implemented this solution in Data Parallel Haskell (DPH) and present benchmarks showing that realistic programs, which used to suffer the blow-up, now have the correct asymptotic work complexity. In some cases, the asymptotic complexity of the vectorised program is even better than the original.
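The sparse matrix-vector multiplication mentioned in the abstract is the standard example of a nested data-parallel program: an inner per-element computation nested inside an outer per-row computation, which vectorisation must flatten. The following is a minimal sketch of that nesting, using ordinary Haskell lists as a stand-in for DPH's parallel arrays; the names and representation are illustrative, not the paper's API.

```haskell
-- Sketch of sparse matrix-vector multiplication as a nested
-- data-parallel program. Each row is a list of (column, value)
-- pairs; the inner comprehension over a row is nested inside the
-- outer comprehension over all rows.
type SparseRow    = [(Int, Double)]
type SparseMatrix = [SparseRow]

smvm :: SparseMatrix -> [Double] -> [Double]
smvm m v = [ sum [ x * (v !! i) | (i, x) <- row ] | row <- m ]

main :: IO ()
main = print (smvm [[(0,2.0),(2,1.0)], [], [(1,3.0)]] [1.0, 2.0, 3.0])
-- evaluates to [5.0,0.0,6.0]
```

In DPH the list comprehensions would be parallel-array comprehensions over `[: ... :]`; the work blow-up discussed in the paper arises when flattening such nested comprehensions loses the sparsity of the inner structure.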

Original language: English
Title of host publication: ICFP'12 - Proceedings of the 2012 ACM SIGPLAN International Conference on Functional Programming
Pages: 259-270
Number of pages: 12
DOIs
Publication status: Published - 22 Oct 2012
Externally published: Yes
Event: 17th ACM SIGPLAN International Conference on Functional Programming, ICFP 2012 - Copenhagen, Denmark
Duration: 9 Sept 2012 - 15 Sept 2012

Conference

Conference: 17th ACM SIGPLAN International Conference on Functional Programming, ICFP 2012
Country/Territory: Denmark
City: Copenhagen
Period: 9/09/12 - 15/09/12

Keywords

  • arrays
  • data parallelism
  • haskell
