Review of a paper about scalable things, MPI, and granularity

Paper:

Pjotr Prins, Dominique Belhachemi, Steffen Möller and Geert Smant
Scalable Computing for Evolutionary Genomics
Methods in Molecular Biology, 2012, Volume 856, Part 5, 529-545,
DOI: 10.1007/978-1-61779-585-5_22



Highlights:

p. 531

"Typically, parallel programming implies complicated data and control flow; causes dead locks, where depending threads wait for each other forever; and causes race conditions, where threads go into eternal loops."


This is simply not true of message passing. With message passing, the source code contains no locks at all. Usually, you only need three functions to pass messages (a minimal sketch follows the list): send(), probe(), and receive().

  • send() => sends a message to a destination
  • probe() => probes a message from a source, if any
  • receive() => receives a message
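
Here is a minimal sketch of those three primitives, assuming MPI's C bindings (my example, not the paper's): rank 0 sends one integer, rank 1 probes for it and then receives it. Note that no lock appears anywhere.

```c
/* Minimal send/probe/receive sketch with MPI (illustrative).
 * Compile with mpicc, run with: mpirun -n 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload = 42;
        /* send() => sends a message to a destination */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        int payload, count;
        /* probe() => probes a message from a source, if any */
        MPI_Probe(0, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);
        /* receive() => receives the message */
        MPI_Recv(&payload, count, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", payload);
    }
    MPI_Finalize();
    return 0;
}
```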

Also, race conditions are not about eternal loops at all. A race condition occurs when the outcome of a program depends on the relative timing of at least two concurrent events; the bug is only triggered under some of the possible orderings.

p. 531

"in a “scatter and gather” pattern"


"scatter and gather" is a collective operation in MPI and therefore is not scalable by definition !


p. 531

"coarse-grained parallelization"

Fine-grained code is just better in the end for everybody: small units of work are easier to balance across processors and easier to reason about than coarse monolithic chunks.



p. 532

"Actors are an abstraction of parallelization and make reasoning about fine-grained parallelization easier and therefore less error prone"


Indeed, with the actor model and fine-grained functions, you can interleave communication and computation inside a single thread in each actor process. Each actor owns its private state, and nothing is shared (a sketch follows).
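
A minimal single-threaded actor sketch in plain C (my illustration; the message tags and structs are hypothetical): one actor, one mailbox, private state, and a receive loop that dispatches on message tags. Communication (dequeuing) and computation (handling) are stacked in the same thread, so no locks are needed.

```c
#include <stdio.h>

enum tag { MSG_ADD, MSG_PRINT, MSG_STOP };

struct message { enum tag tag; int value; };

struct actor { int total; /* private state: no other thread touches it */ };

/* The actor's receive loop: handle one message at a time, in order. */
static void actor_run(struct actor *self, const struct message *mailbox, int n) {
    for (int i = 0; i < n; i++) {
        switch (mailbox[i].tag) {
        case MSG_ADD:   self->total += mailbox[i].value; break;
        case MSG_PRINT: printf("total = %d\n", self->total); break;
        case MSG_STOP:  return;
        }
    }
}

int main(void) {
    struct message inbox[] = {
        { MSG_ADD, 40 }, { MSG_ADD, 2 }, { MSG_PRINT, 0 }, { MSG_STOP, 0 }
    };
    struct actor a = { 0 };
    actor_run(&a, inbox, 4);       /* prints: total = 42 */
    return 0;
}
```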


p. 532

"Actors, however, are much faster, more intuitive, and, therefore, probably, safer than MPI."


Utter garbage: MPI itself is safe and faster than light; it is the program using it that may not be. You can even use MPI to simulate some actor concepts, as sketched below.
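
A hedged sketch of an actor-style mailbox on top of MPI (again my illustration, with hypothetical tag names): each rank acts as an actor whose mailbox is the MPI message queue, and the message tag selects the handler.

```c
/* Actor-style dispatch over MPI (illustrative).
 * Compile with mpicc, run with: mpirun -n 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

enum { MSG_ADD = 1, MSG_STOP = 2 };

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                       /* sender actor */
        int v = 42;
        MPI_Send(&v, 1, MPI_INT, 1, MSG_ADD,  MPI_COMM_WORLD);
        MPI_Send(&v, 0, MPI_INT, 1, MSG_STOP, MPI_COMM_WORLD);
    } else if (rank == 1) {                /* receiver actor */
        int total = 0, running = 1;        /* private state */
        while (running) {
            MPI_Status st;
            int v;
            MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            MPI_Recv(&v, 1, MPI_INT, st.MPI_SOURCE, st.MPI_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            switch (st.MPI_TAG) {          /* dispatch on the tag */
            case MSG_ADD:  total += v; break;
            case MSG_STOP: printf("total = %d\n", total); running = 0; break;
            }
        }
    }
    MPI_Finalize();
    return 0;
}
```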


p. 533

"Unlike OpenMP, MPI and Actors, jobs are run independently as separate processes."


This is not true for MPI. With OpenMP, you do have a single process with many threads, and that is bad: all the threads work inside the same virtual memory, which is prone to programming errors.

But an MPI job is just a set of separate processes passing messages. On a modern operating system, each process has its own virtual memory, so a whole class of bugs is weeded out thanks to memory page protection.
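
A hedged sketch of the shared-memory pitfall, assuming OpenMP (my example): every thread updates the same variable in one address space, so removing the atomic turns the update into a data race. Separate MPI processes cannot collide this way.

```c
/* Shared-memory race sketch with OpenMP (illustrative).
 * Compile with: cc -fopenmp race.c */
#include <omp.h>
#include <stdio.h>

int main(void) {
    int total = 0;                 /* shared by all threads */
    #pragma omp parallel for
    for (int i = 0; i < 1000000; i++) {
        /* remove this pragma and the update becomes a data race */
        #pragma omp atomic
        total += 1;
    }
    printf("total = %d\n", total); /* prints 1000000 */
    return 0;
}
```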


p. 534

"One of the strengths of Beowulf is that it comes with full MPI support."


Indeed, MPI is cool and ultimately message passing is the best approach for scalable computing.


p. 535

"On-Demand Scalability with BioNode"


This should read "on-demand computing": BioNode is just a Linux distribution, so it does not, by itself, provide scalability.


p. 536

"The experience gained with running BioNode, and examples, can easily be leveraged into existing cluster and HPC setups, as many of these use the same, or similar, tools."


Using a BioNode image is cool because you get a ton of applications preinstalled, but virtualization also wastes some processor cycles.

But I have to say that this is nice! It is like using MIT StarCluster on Amazon EC2: you get a load of useful HPC tools with a single click, or something like that.

Basically, though, this has nothing to do with computational scalability. The whole point is that shipping images becomes the software distribution process itself: instead of downloading source code, people download huge operating system images that include tons of apps.

Too much virtualization is also bad for freedom in the context of software usage.
