CS G280: Parallel Computing
The schedule of final talks is now available. Please register
for a time slot. Please also update your "Description of Projects" with
your final report by Tuesday, Dec. 11 (last class). Thank you.
Instructor: Prof. Gene Cooperman
Course Time: Tuesdays, 6:00 - 9:00 p.m.
Prerequisites: general sophistication in UNIX programming
NOTE: Please ignore the catalog description for CS G280.
It dates from circa 1990, and has nothing to do with the modern world.
Parallel computing today is dominated by commodity hardware and the
use of standardized protocols and system services. It is related to
distributed computing, with the following important difference:
parallel computing assumes that the CPU is the bottleneck, while
distributed computing assumes that the network (bandwidth and/or
latency) is the bottleneck. The emphasis will be on understanding the
many middleware technologies and adapting them to parallel computing.
The course will include a project requiring use of one or more of the
middleware technologies. For a sampling of projects, see
http://www.ccs.neu.edu/home/gene/projects.html . Additional project
proposals are welcome.
You can also gain an overview of this course by looking at the
materials from a minicourse that I taught.
Professor G. Cooperman
Office: 336 West Village Hall
Office Hours: Tuesdays 5:00 - 6:00, Fridays at 12:30, and by appointment.
Textbook: None (Course notes and pointers to the web will be used.)
Exams and Grades:
There will be one midterm and a project.
Grades will be weighted 35% for the midterm, 55%
for the project, and 10% for class participation.
The project will be given in several parts, with oral presentations
and written components. The precise schedule for the project will
depend on the number and the interests of the students. Students
will have an opportunity to choose a project from a range of options, including
extending an implementation of MPI, solving an applied problem using a
parallel tool, or studying the current literature on some parallel algorithms.
TOPIC 1: Brief Introduction to Parallel Computing via TOP-C
TOPIC 2: Hardware Interface:
POSIX Threads (shared memory), TCP/IP Sockets (distributed memory),
and DSM (distributed shared memory): cache coherence, bus snooping,
synchronization, TCP/IP parameters, and other topics
Intermediate models (neither distributed nor shared): COMA, CC-NUMA, DSM
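For concreteness, here is a minimal sketch of shared-memory synchronization
with POSIX threads (an illustrative example, not part of the course notes):
two threads increment a shared counter, and a mutex protects the update.
Compile with gcc's -pthread flag.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section */
            counter++;                    /* shared-memory update */
            pthread_mutex_unlock(&lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expect 2000000)\n", counter);
        return 0;
    }

Without the mutex, the two read-modify-write sequences can interleave and the
final count is typically less than 2,000,000; this is the kind of race that the
synchronization mechanisms in this topic are meant to prevent.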
TOPIC 3: Algorithmic Concepts:
parallel prefix, pointer jumping, PRAM and bridging models of parallelism
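As a small illustration of parallel prefix (again an illustrative sketch, not
course material), the following C program runs the O(log n) rounds of the
standard doubling scheme. In the PRAM model, every element in a round would be
updated by its own processor; here each round's updates are simulated by a
sequential loop, run high-to-low so that a round reads only values from the
previous round.

    #include <stdio.h>
    #define N 8

    int main(void) {
        int x[N] = {1, 2, 3, 4, 5, 6, 7, 8};
        for (int d = 1; d < N; d *= 2)          /* log2(N) rounds */
            for (int i = N - 1; i >= d; i--)    /* conceptually: all i in parallel */
                x[i] += x[i - d];
        for (int i = 0; i < N; i++)
            printf("%d ", x[i]);                /* prefix sums: 1 3 6 10 ... 36 */
        printf("\n");
        return 0;
    }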
TOPIC 4: Overview of Middleware for Distributed and Parallel Computing:
CORBA, XML/SOAP technologies, the Computational Grid protocols,
MPI (Message Passing Interface), POSIX threads, TCP/IP services,
parallel BLAS (parallel basic linear algebra subroutines),
and other "middleware systems".
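To give a flavor of MPI (an illustrative sketch, not a course assignment), here
is a minimal program in which every rank prints a greeting and rank 1 sends one
integer to rank 0. Compile with mpicc and run with, e.g., mpirun -np 2 ./a.out,
assuming a standard MPI installation such as MPICH or Open MPI.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, msg;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);
        if (size > 1) {
            if (rank == 1) {
                msg = 42;
                MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);   /* to rank 0, tag 0 */
            } else if (rank == 0) {
                MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("Rank 0 received %d from rank 1\n", msg);
            }
        }
        MPI_Finalize();
        return 0;
    }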
TOPIC 5: Programmer's Models of Parallelism:
- Linda (shared tasks),
- Cilk (work-stealing model of parallelism),
- TOP-C (Task Oriented Parallelism),
- OpenMP (Open MultiProcessing: shared memory parallelism),
- HPF (High Performance Fortran: data parallelism)
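As a small illustration of the OpenMP model listed above (an illustrative
sketch only), the following loop is split among threads and the per-thread
partial sums are combined by a reduction clause. Compile with an OpenMP flag
such as gcc's -fopenmp.

    #include <omp.h>
    #include <stdio.h>
    #define N 1000000

    int main(void) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)   /* iterations split among threads */
        for (int i = 0; i < N; i++)
            sum += 1.0 / (i + 1);                   /* partial harmonic sum */
        printf("threads available: %d, sum = %f\n", omp_get_max_threads(), sum);
        return 0;
    }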
TOPIC 6: Applications of Parallelism