THINGS TO DO

RPi Revision B Cluster Buildout
  1. Confirm working RPi units and Master/Worker images
  2. Follow tutorial instructions; burning master/worker images already done so skip that part
  3. Update the IP addresses on worker images as needed
  4. Done!
Create an SD Card Chart
  1. Link to SD Card Chart
  2. One row for each card in Card Case
  3. Add information in each column for HOSTNAME (raspberrypi, work001, etc.), CARD SIZE (in GB), CARD CODE (label on the card), LINUX (Raspbian) VERSION (from uname -a), EXTRAS (installed software beyond MPI), and RPI3 (works on RPi Model 3, yes/no). A sketch of commands for gathering this information follows the list.
  4. Done
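
To help fill in the chart, here is a minimal sketch of commands that should report most of the fields when run on a booted card. It assumes a standard Raspbian install where the SD card appears as /dev/mmcblk0 and /etc/os-release exists; older images may differ.

    hostname                              # HOSTNAME column
    lsblk -d -o NAME,SIZE /dev/mmcblk0    # CARD SIZE column (SD card usually shows up as mmcblk0)
    uname -a                              # LINUX (Raspbian) VERSION column
    head -2 /etc/os-release               # release name/number (file may be absent on very old images)
    which mpiexec mpicc                   # quick check that the MPI tools are on the PATH (EXTRAS column)

The CARD CODE column comes from the physical label on the card, so there is no command for that one.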
RPi 3 Cluster Buildout with NFS Disk Drive
  1. Follow the updated instructions in the next two steps to set up the RPi 3 nodes. The master will be the node connected to the hard drive (leave the hard drive disconnected until instructed to connect it; a quick detection check is sketched after this list).
  2. Done…
  3. …but why is the powered USB hub needed?
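
Once the drive is connected through the hub, a quick sanity check might look like the sketch below, assuming the drive shows up as the first USB disk (sda):

    lsblk               # the hard drive should appear as sda, with its partition(s) listed
    dmesg | tail -20    # recent kernel messages should show the USB disk being detected

On the hub question: the Pi's own USB ports can supply only a limited amount of current, which is usually not enough to spin up a hard drive, so the powered hub most likely exists to power the drive; worth confirming against the drive's rated current draw.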
System Software installed
  1. I'm curious to know if the programs lstopo and lscpu are available on RPi systems once MPI is installed
  2. For comparison, run the two commands on a lab computer and see how the output compares to the output on an RPi (assuming the commands exist on the Pis); a sketch of the comparison commands follows this list.
  3. Partly Done: lscpu is available; lstopo source code is found somewhere deep in the ~/mpich-install directory tree but it was not built/installed along with other MPI tools
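
A sketch of the comparison in step 2, assuming we just want the raw output of both tools captured per machine for a side-by-side look (the output file name is arbitrary):

    # Run on a lab computer and on a Pi, then compare the resulting files.
    for cmd in lscpu lstopo; do
        if command -v "$cmd" >/dev/null 2>&1; then
            echo "== $cmd on $(hostname) =="
            "$cmd"
        else
            echo "$cmd not found on $(hostname)"
        fi
    done > "cpu-info-$(hostname).txt" 2>&1

A likely explanation for item 3: lstopo belongs to the hwloc library, which MPICH bundles as source but does not necessarily build or install by default. On Raspbian it should be installable separately, probably with something like apt-get install hwloc (package name assumed).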
Automating the Mount using a Script

Starting the NFS service and mounting the hard drive file system must be done every time the cluster is rebooted or powered up. We can partially automate this by creating a shell script containing the required commands (3 to run on the master, 2 to run on the worker nodes), and running the script when logging in to each node.

  1. Learn about writing shell scripts by working through the tutorial at LinuxCommand.org. The tutorial has 15 parts, but only the first few are necessary for what we need; try parts 1 through 8. Note: it might be worthwhile to run through the tutorial Learning the Shell first.
  2. With this knowledge, write shell scripts that run the NFS start and mount commands on the master and worker nodes (a sketch appears after this list).
  3. BONUS: see if you can figure out how to run these shell scripts automatically when the RPi systems boot up (ideally), or when you log in to each node (by calling them from something like .bashrc).
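
A minimal sketch of the two scripts, assuming the drive's partition is /dev/sda1, the shared directory is /mnt/nfs, and the master's address is 192.168.1.100; all three values are placeholders to be replaced with the ones from our NFS setup notes. On the master:

    #!/bin/bash
    # start-nfs-master.sh -- run on the master after each reboot (sketch)
    sudo mount /dev/sda1 /mnt/nfs            # attach the USB hard drive locally
    sudo service rpcbind start               # NFS depends on rpcbind
    sudo service nfs-kernel-server start     # start the NFS server; re-reads /etc/exports

and on each worker, after the master's script has finished:

    #!/bin/bash
    # mount-nfs-worker.sh -- run on each worker node (sketch)
    sudo service rpcbind start
    sudo mount 192.168.1.100:/mnt/nfs /mnt/nfs   # mount the master's exported directory

For the bonus item, the usual candidates are /etc/rc.local (runs once at boot, as root) or a line in ~/.bashrc (runs at every login); whether either works cleanly here depends on the network and the USB drive being ready at that point, so it will take some experimenting.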
Performance Limits and Profiling

This is worth a read: Performance Limits and Profiling

Pandemic Exemplar

Let's give the cluster some real work to do! It may be a little too ambitious to jump directly to a project of this size, but after several weeks of hardware configuration and system administration tasks, it might be fun to tackle a big software project. If we get stuck, we can always back off and do some smaller, simpler programs and build back up to this.

  1. Work through the Pandemic Exemplar project. The project comes with starter code and step-by-step instructions on how to build it.
  2. Once it works on one node, make sure it works on all 3 nodes by running it with 3 processes using the flag -n 3. Then try -n 6 and see if it takes less time (see the question about using the wtime function).
  3. Run several experiments. The tutorial suggests two larger tests simulating populations of 70000; run those first with one node, then with 3 nodes and 6 nodes. Record the execution times for each run in a table, noting the number of processes used. As described in the MPI Tutorial, you can also set up a machine_file that tells the system to use some or all of the processor cores on each node of the cluster, so we should get some speedup from -n 12 (I think I already did this; look at the various files in ~/mpi_test on the master node). A sketch of the run commands and a sample machine_file follow this list.
  4. There is a section on extending the project to use OpenMP to speed up per-node computations. We'll need to break here and learn how to test whether the Raspbian GCC system supports OpenMP, and if not, how to install it (a quick check is sketched below).
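
For steps 2 and 3, the run commands might look like the sketch below; the executable name pandemic, the hostnames, and the per-node process counts are assumptions to be replaced with whatever our build and host configuration actually use.

    # One node vs. several processes (assumed executable name: pandemic)
    mpiexec -n 1 ./pandemic
    mpiexec -n 3 ./pandemic
    mpiexec -n 6 ./pandemic

    # machine_file: one line per node, optionally with a process count per node
    # (hostnames and counts below are placeholders for our cluster)
    printf 'master:4\nworker001:4\nworker002:4\n' > machine_file

    # Use the cores on all three nodes
    mpiexec -f machine_file -n 12 ./pandemic

Putting time in front of each mpiexec call gives a rough wall-clock number to cross-check against whatever the program itself reports via the wtime function mentioned in step 2.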
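
For step 4, a quick way to test whether the Raspbian GCC toolchain supports OpenMP is sketched here; the temporary output path is arbitrary.

    # Does the compiler define the _OPENMP macro when given -fopenmp?
    echo | gcc -fopenmp -dM -E - | grep -i openmp

    # Can it also link against the OpenMP runtime (libgomp)?
    echo 'int main(void){return 0;}' | gcc -fopenmp -x c -o /tmp/omp_test - && echo "OpenMP link OK"

If both checks pass, GCC's OpenMP support is in place (it usually ships with Raspbian's gcc); if not, the missing piece is typically the libgomp runtime/development package.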

Commentary

I'll post dated comments and further instructions here, latest first. Feel free to post here using normal font.
