MIT Extended LLVM IR to Enable Better Optimization of Parallel Programs

Researchers at MIT have been working on a fork of LLVM to explore a new approach to optimizing parallel code by embedding fork-join parallelism directly into the compiler’s intermediate representation (IR). This, the researchers maintain, makes it possible to leverage most of the IR-level serial optimizations for parallel programs.
Fork-join parallelism is a way to organize parallel programs that is especially suited to divide-and-conquer algorithms, such as merge sort. It is supported in mainstream compilers such as GCC and LLVM through a set of linguistic extensions, for example those provided by OpenMP (e.g., #pragma omp parallel, #pragma omp parallel for, and others) and by Cilk Plus (cilk_spawn and cilk_sync). The compiler front-end handles those linguistic extensions by “lowering” the parallel constructs to a more primitive representation, which is then translated into IR. For example, the Cilk Plus cilk_for extension makes it possible to run each iteration of a loop in parallel, as in the snippet below.
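A minimal sketch of such a loop, assuming illustrative normalize and norm functions rather than the article's original snippet:

    #include <cilk/cilk.h>
    #include <math.h>

    /* Illustrative helper: Euclidean norm of a vector (not from the article). */
    double norm(const double *a, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += a[i] * a[i];
        return sqrt(sum);
    }

    /* cilk_for tells the compiler that the iterations may run in parallel. */
    void normalize(double *restrict out, const double *restrict in, int n) {
        cilk_for (int i = 0; i < n; ++i)
            out[i] = in[i] / norm(in, n);
    }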
One of the drawbacks of this approach, though, is that the compiler middle-end no longer sees a for loop, only opaque runtime calls: the loop body is outlined into its own function and passed to a library routine that handles spawning the loop iterations and synchronizing them afterwards.
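To make this concrete, the following sketch shows roughly what the lowered code looks like. The runtime entry point parallel_for_rt, the argument struct, and the outlined body are hypothetical names invented for illustration, not actual Cilk Plus or OpenMP runtime symbols; a real runtime would distribute the iterations across worker threads instead of running them serially as the stub here does.

    double norm(const double *a, int n);   /* defined in the sketch above */

    struct normalize_args { double *out; const double *in; int n; };

    /* Hypothetical runtime entry point (name invented for illustration).
       A real runtime would spawn iterations across worker threads and join
       them; it is stubbed out serially here only so the sketch runs. */
    static void parallel_for_rt(void (*body)(void *ctx, long i), void *ctx, long n) {
        for (long i = 0; i < n; ++i)
            body(ctx, i);
    }

    /* The loop body, outlined into its own function by the front-end. */
    static void normalize_body(void *ctx, long i) {
        struct normalize_args *a = ctx;
        a->out[i] = a->in[i] / norm(a->in, a->n);
    }

    /* What the middle-end actually sees: no for loop, just an opaque call
       into the parallel runtime with a pointer to the outlined body. */
    void normalize_lowered(double *out, const double *in, int n) {
        struct normalize_args a = { out, in, n };
        parallel_for_rt(normalize_body, &a, n);
    }

In this form the call to norm is still loop-invariant, but a standard pass such as loop-invariant code motion can no longer hoist it, because the loop itself is hidden behind the runtime call and is invisible to the middle-end.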
