PROGRESS

CLUSTER_PROJECT_LB2017FEB221605: So far, a local area network has been successfully created between two Pi 2s and two Pi 3s. A Raspberry Pi 2 is acting as the master node in this configuration. The master node has been loaded with the proper MPI software, following the tutorial from the Danial '16 summer project. The research laptop will soon be replaced with a newer model, which may cause a delay of a day or so for set-up and registration. Known issues: no log files survive from the '16 summer research, and the tutorial was designed without the Pi 3 in mind, so adapting it to our mixed Pi 2/Pi 3 cluster could cause a serious delay. No task has yet been chosen to assign to the cluster upon its completion.
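A quick way to verify that the MPI installation works on the master (a generic MPICH sanity check, not a step from the Danial tutorial) is to launch a trivial command through mpiexec; run locally, it should print the master's hostname once per process:

  mpiexec -n 4 hostname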
CLUSTER_PROJECT_LB2017MAR011641: The goal today was to establish at least one SSH connection that could be used to command the new 'cluster' to work together, in this case to estimate the value of pi. The master node was successfully calibrated and set up for the procedure. worker001 was then connected to the LAN, and a password-less SSH key for it was established on the master node; at that point the two nodes could successfully talk to each other. The master SSHed into the worker node and changed its hostname to worker001. worker001 was then restarted, which changed its IP address. The new IP address could not be determined while the interface reported only 'UP BROADCAST RUNNING MULTICAST' with no address assigned. Due to illness, the team was unable to mend the error at the time.
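For reference, a minimal sketch of the password-less SSH setup described above; the username and address below are placeholders, not the lab's actual values:

  # on the master node
  ssh-keygen -t rsa              # accept an empty passphrase for password-less login
  ssh-copy-id pi@192.168.1.101   # copy the public key to worker001 (placeholder IP)
  ssh pi@192.168.1.101           # should now log in without a password prompt
  # on the worker: set the hostname (takes effect after a reboot)
  sudo sh -c 'echo worker001 > /etc/hostname'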
Cluster Project lb2017mar29_ABSTRACT_DRAFT: Inexpensive computers can be tied together to form a network in which they all tackle a problem at once, a technique called parallel programming. This reduces the processing time required for certain problem sets. This project focuses on networking a number of credit-card-sized Raspberry Pi computers (available for less than $50 each) into a so-called Beowulf cluster, which for some problems dramatically improves performance over running the same problem on a single computer. Such a cluster can then perform advanced calculations or data-sorting algorithms much more efficiently than a standard system, without the added cost of specialized parallel processing units, which require extensive cooling apparatus.
lb2017apr05 Important notes: the MPICH examples are in /home/pi/mpich/mpich-3.1.1/examples. The pi calculation is done with Riemann sums, which decompose in such a way that the work can be divided among parallel worker nodes.
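To make the decomposition concrete, here is a minimal sketch of a midpoint Riemann-sum pi estimate split across MPI ranks. It is modeled on the idea behind the MPICH cpi example, not copied from it, and the rectangle count n is an arbitrary choice:

  /* Sketch: midpoint Riemann sum for pi = integral from 0 to 1 of 4/(1+x^2),
     split across MPI ranks. Names and the rectangle count are our own choices. */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size;
      long i, n = 100000000L;              /* number of rectangles (assumed value) */
      double h, x, local = 0.0, pi = 0.0, t0, t1;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      t0 = MPI_Wtime();
      h = 1.0 / (double)n;
      /* each rank handles every size-th rectangle, starting at its own offset */
      for (i = rank; i < n; i += size) {
          x = h * ((double)i + 0.5);       /* midpoint of rectangle i */
          local += 4.0 / (1.0 + x * x);
      }
      local *= h;

      /* combine the partial sums on rank 0 */
      MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
      t1 = MPI_Wtime();

      if (rank == 0)
          printf("pi ~= %.16f  (%.4f s on %d process(es))\n", pi, t1 - t0, size);

      MPI_Finalize();
      return 0;
  }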
lb2017apr12: part 1 Important notes: the entire issue with UP BROADCAST RUNNING MULTICAST is now fixed, and everything can be set up properly again. The fix: make sure the switch is connected to internet slot 3 (131 A, under the window) and that the Pi units are restarted after they are also connected to the switch. This restores IP address broadcasting.
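A quick check after reconnecting (standard ifconfig usage, not part of the original notes): on a Pi, confirm that an inet address now appears alongside the UP BROADCAST RUNNING MULTICAST flags:

  ifconfig eth0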
lb2017apr12: part 2 The three worker nodes could be SSHed into from the master node. The external hard drive was connected and shared across the system as a whole using the NFS tutorial method found here: http://suzannejmatthews.github.io/2015/06/15/parallella-cluster/. The methodology appears to apply across operating-system differences, even though the Pi 2 runs a different operating system version from the Pi 3. To allow the Pi units to work in parallel with one another, worker003's image was cloned after the SSH key was installed, and that image was written onto the units for worker002 and worker004 as well. The nodes were renamed after the cloning to match their predetermined names in the cluster. After this procedure was completed, we followed the tutorial found at http://www.suzannejmatthews.com/private/RaspberryPi_cluster.pdf, with the minor exception that the host folder for running the cpi test is no longer ~/mpi_test/ but rather ~/mpi_testing/.
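As a sketch of the final run step: the machinefile name and its exact contents below are illustrative (the node names follow this log; whether the master itself is listed depends on the tutorial's setup):

  $ cat machinefile
  master
  worker002
  worker003
  worker004
  $ cd ~/mpi_testing
  $ mpiexec -f machinefile -n 4 ./cpi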
lb2017apr17 Using the distributed Riemann-sum program to estimate pi and print the processing time required, 10 data points were taken for each run configuration of 1, 2, 3, and 4 nodes. The resulting times were averaged and examined. Altogether, the processing time decreases significantly in proportion to the number of nodes available to distribute the calculation across (more nodes: less time required to calculate).
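The data collection could be scripted along these lines (an illustrative shell loop, not a recorded transcript; the machinefile and output file names are our own):

  for n in 1 2 3 4; do
    for trial in $(seq 1 10); do
      mpiexec -f machinefile -n $n ./cpi >> timings_n$n.txt
    done
  done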