AJ P on Hive

Parallelism With Haskell

Homework #1

Concurrent computing allows multiple tasks to be in progress at any given instant, i.e.
concurrently. This does not necessarily mean progress is being made on two tasks simultaneously. In
parallel computing, multiple tasks cooperate to solve some problem, usually executing simultaneously
on separate cores. In distributed computing, separate programs cooperate with each other to solve a
problem. These programs can be running on machines separated by great distances, or on a cluster of
machines connected by a network of some kind. Both parallel and distributed computing are examples
of concurrent computing, but a program being concurrent does not inherently imply the use of
parallel or distributed programming.
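
As a minimal Haskell sketch of the parallel case (assuming GHC with the parallel package, compiled
with -threaded), the two fib calls below can be evaluated simultaneously on separate cores:

  import Control.Parallel (par, pseq)

  -- Naive Fibonacci, used only to generate real work.
  fib :: Int -> Integer
  fib n | n < 2     = fromIntegral n
        | otherwise = fib (n - 1) + fib (n - 2)

  main :: IO ()
  main =
    -- par sparks a for evaluation on another core; pseq forces b
    -- on the current thread before combining the results.
    let a = fib 30
        b = fib 31
    in  a `par` (b `pseq` print (a + b))

Run with, e.g., +RTS -N2 to give the runtime two cores; on a single core the same program still
runs concurrently, just without simultaneous progress.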

Instruction Level Parallelism (ILP) works by having multiple functional units simultaneously execute
instructions. This is achieved through pipelining (increasing instruction throughput by overlapping
the stages of execution of successive instructions) and superscalar (multiple-issue) processors,
which attempt to “speculate” on the outcomes of future instructions before they execute.
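
ILP is exploited by the hardware rather than expressed in source code, but as a loose illustration
(the functions are made up for this example), the first definition below contains two independent
multiplications that a multiple-issue processor could execute in the same cycle, while the second
forms a dependent chain with no parallelism to extract:

  -- The two multiplications are independent of one another, so a
  -- superscalar core can issue them simultaneously.
  independent :: Int -> Int
  independent x = (x * 3) + (x * 5)

  -- Each step needs the previous result, so the operations must
  -- execute one after another.
  dependent :: Int -> Int
  dependent x = ((x * 3) + 1) * 5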

Thread Level Parallelism (TLP) provides parallelism through the simultaneous execution of multiple “threads” of control. TLP is “coarser grained” than ILP because threads represent larger program units than individual instructions; however, multithreading itself can be fine-grained, coarse-grained, or “simultaneous” (SMT). Further, TLP and ILP are usually used in tandem with one another.
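
A minimal sketch of TLP in Haskell, assuming GHC's Control.Concurrent (the messages and the MVar
signalling are illustrative):

  import Control.Concurrent (forkIO)
  import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

  main :: IO ()
  main = do
    done <- newEmptyMVar
    -- forkIO starts a second thread of control; with the threaded
    -- runtime (+RTS -N), it can run on another core simultaneously.
    _ <- forkIO $ do
           putStrLn "worker thread running"
           putMVar done ()
    putStrLn "main thread running"
    takeMVar done  -- block until the worker signals completion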

Homework #3 Problems 1 & 2
