
  • Program transformations for distributed-memory parallelization in the Optimizing Parallelizing System

    This paper addresses the problem of developing a parallelizing compiler for computational systems with distributed memory and approaches to solving it. The corresponding parallelizing program transformations, implemented in the Optimizing Parallelizing System, are described. These transformations automatically determine an optimal data placement in distributed memory that minimizes inter-node data transfers, and they detect the points in the source program at which the directives for those transfers should be inserted. Inter-node data transfer optimization is performed using a special graph model that connects the operators and variables of the source program. The Optimizing Parallelizing System includes program transformations that generate Message Passing Interface (MPI) code using an affine data distribution for arrays, with the distribution parameters specified by compiler pragmas; examples of MPI code generated by these distribution methods are given in the paper, and an illustrative sketch of such code follows the keywords below. This work builds on the authors' previous works. Developing compilers that generate code for distributed-memory systems is becoming increasingly important for future central processing units with tens or hundreds of thousands of cores.

    Keywords: automatic parallelization, distributed memory, program transformations, data distribution, message passing
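
    To make the abstract's notion of compiler-generated MPI code with a block (affine) array distribution concrete, here is a minimal hand-written sketch of the kind of code such a transformation might emit for a simple loop a[i] = b[i] + 1. It is not actual output of the Optimizing Parallelizing System; the array size, distribution parameters, and variable names are illustrative assumptions.

    /* Hypothetical sketch of MPI code a parallelizing compiler might emit
     * for the loop  a[i] = b[i] + 1  with a block (affine) distribution
     * of the arrays across process ranks.  Names and parameters are
     * illustrative, not output of the Optimizing Parallelizing System. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1024  /* global array size; assumed divisible by the number of ranks */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int block = N / size;  /* block length per rank; rank owns indices [rank*block, (rank+1)*block) */
        double *a_local = malloc(block * sizeof(double));
        double *b_local = malloc(block * sizeof(double));
        double *a_global = NULL, *b_global = NULL;

        if (rank == 0) {
            a_global = malloc(N * sizeof(double));
            b_global = malloc(N * sizeof(double));
            for (int i = 0; i < N; i++) b_global[i] = (double)i;
        }

        /* distribute the input array block-wise over the ranks */
        MPI_Scatter(b_global, block, MPI_DOUBLE,
                    b_local,  block, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* each rank executes its slice of the original loop */
        for (int i = 0; i < block; i++)
            a_local[i] = b_local[i] + 1.0;

        /* collect the distributed result back on rank 0 */
        MPI_Gather(a_local, block, MPI_DOUBLE,
                   a_global, block, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("a[0]=%g  a[N-1]=%g\n", a_global[0], a_global[N - 1]);
            free(a_global);
            free(b_global);
        }
        free(a_local);
        free(b_local);
        MPI_Finalize();
        return 0;
    }

    In this sketch the data placement is fixed by the block distribution, so the only inter-node transfers are the initial scatter of the input and the final gather of the result; the transformations described in the paper aim to choose such placements and insert such transfer directives automatically.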