ECE: Electrical & Computer Engineering

ECE 5510 Multiprocessor Programming



ECE 5510 Multiprocessor Programming (3C)

Principles and practice of multiprocessor programming. Illustration of multiprocessor programming principles through the classical mutual exclusion problem, correctness properties of concurrency (e.g., linearizability), shared memory properties (e.g., register constructions), and synchronization primitives for implementing concurrent data structures (e.g., consensus protocols). Illustration of multiprocessor programming practice through programming patterns such as spin locks, monitor locks, the work-stealing paradigm, and barriers. Discussion of concurrent data structures (e.g., concurrent linked lists, queues, stacks, hash maps, skiplists) through synchronization patterns ranging from coarse-grained locking to fine-grained locking to lock-free structures, atomic synchronization primitives, elimination, and transactional memory.
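
As a concrete taste of the spin-lock pattern named above, here is a minimal test-and-set spin lock in Java; the class and method names are an illustrative sketch, not code taken from the course materials.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Minimal test-and-set (TAS) spin lock: a thread repeatedly applies the
    // atomic getAndSet primitive until it observes that the lock was free.
    public class TASLock {
        private final AtomicBoolean state = new AtomicBoolean(false);

        public void lock() {
            // getAndSet(true) returns the previous value; the thread that
            // flips the flag from false to true owns the lock.
            while (state.getAndSet(true)) {
                // busy-wait (spin) until the lock is released
            }
        }

        public void unlock() {
            state.set(false);
        }
    }

Spinning directly on getAndSet keeps the example short; practical spin locks use variants such as test-and-test-and-set or exponential backoff to reduce the contention that simple spinning creates.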

What is the reason for this course?

The computer industry is undergoing a paradigm shift. Chip manufacturers are increasingly investing in, and manufacturing, a new generation of multiprocessor chips called multicores, because it has become difficult to raise clock speeds by packing more transistors onto the same chip without increasing power consumption and overheating. Consequently, application software performance can no longer be improved simply by relying on increased clock speeds of single processors; software must be explicitly written to exploit the hardware parallelism of multiprocessors. Programming multiprocessors is fundamentally different from programming single processors, because the programmer must understand how concurrent computations on separate processors coordinate with one another, rather than how concurrent computations on a single processor interleave. This is a complex and intricate problem, and it requires new abstractions, algorithms, and programming tools, which are precisely the focus of the course.

Typically offered: Fall. Program Area: Computers.

Prerequisites: Graduate standing and 4534 or 4550.

Department Syllabus Information:

Major Measurable Learning Objectives:
  • Explain the fundamental principles of multiprocessor programming, including:
      • concurrency correctness properties such as consistency, linearizability, progress, fairness, and deadlock-freedom;
      • properties of shared memory such as register constructions and atomic snapshots;
      • synchronization primitives for implementing concurrent data structures, ranging from simple ones (e.g., atomic registers, consensus protocols, FIFO queues) to powerful universal constructions (e.g., universality of consensus).
  • Write multiprocessor programs using:
      • programming patterns including spin locks and contention, monitor locks and waiting, work-stealing and parallelism, and barriers;
      • concurrent data structures including concurrent linked lists, concurrent queues, concurrent stacks, concurrent hash maps, and concurrent skiplists;
      • synchronization patterns including coarse-grained locking, fine-grained locking, optimistic locking, lazy synchronization, non-blocking synchronization, atomic synchronization primitives, elimination, and parallel search (a coarse-grained locking sketch follows this list).
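
The following sketch illustrates the coarse-grained locking pattern from the objectives above: a single lock guards an entire sorted linked list, which makes correctness easy to argue but serializes all operations. The class is a simplified, hypothetical example, not the course's reference implementation.

    // Coarse-grained locking: every operation acquires the one list-wide lock.
    public class CoarseList {
        private static class Node {
            final int key;
            Node next;
            Node(int key) { this.key = key; }
        }

        private final Node head = new Node(Integer.MIN_VALUE); // sentinel
        private final Object lock = new Object();

        public boolean add(int key) {
            synchronized (lock) {
                Node pred = head;
                while (pred.next != null && pred.next.key < key) {
                    pred = pred.next;
                }
                if (pred.next != null && pred.next.key == key) {
                    return false;                 // key already present
                }
                Node node = new Node(key);
                node.next = pred.next;
                pred.next = node;
                return true;
            }
        }

        public boolean contains(int key) {
            synchronized (lock) {
                Node curr = head.next;
                while (curr != null && curr.key < key) {
                    curr = curr.next;
                }
                return curr != null && curr.key == key;
            }
        }
    }

Fine-grained (hand-over-hand) locking, optimistic locking, and lazy synchronization refine this design by locking only the nodes an operation actually inspects or modifies, trading simplicity for scalability.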

Course Topics
Topic Percentage
Introduction to shared objects and synchronization, and the mutual exclusion problem 5%
Correctness properties of concurrent programs (consistency, linearizability, progress, fairness, deadlock-freedom) 10%
Foundations of shared memory (register constructions, atomic snapshots) 10%
Synchronization operations for concurrent data structures (atomic registers, consensus protocols, FIFO queues, universality of consensus) 15%
Programming patterns: Spin locks and contention, monitor locks and waiting, work-stealing and parallelism, barriers 10%
Concurrent data structures (concurrent linked lists, concurrent queues, concurrent stacks, concurrent hash maps, concurrent skiplists) 20%
Synchronization patterns: coarse-grained locking, fine-grained locking, optimistic locking, lazy synchronization, non-blocking synchronization 15%
Atomic synchronization primitives, elimination, parallel search 10%
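
To give a flavor of the non-blocking synchronization and atomic synchronization primitives topics listed above, here is a sketch of a lock-free stack in the style of Treiber's algorithm, built on compareAndSet; the class is an illustrative assumption rather than course-provided code.

    import java.util.concurrent.atomic.AtomicReference;

    // Lock-free stack: push and pop retry an atomic compareAndSet on the
    // top-of-stack reference instead of acquiring a lock.
    public class LockFreeStack<T> {
        private static class Node<T> {
            final T value;
            Node<T> next;
            Node(T value) { this.value = value; }
        }

        private final AtomicReference<Node<T>> top = new AtomicReference<>();

        public void push(T value) {
            Node<T> node = new Node<>(value);
            while (true) {
                Node<T> oldTop = top.get();
                node.next = oldTop;
                if (top.compareAndSet(oldTop, node)) {
                    return;                       // CAS installed the new top
                }
                // CAS failed: another thread changed top first; retry
            }
        }

        public T pop() {
            while (true) {
                Node<T> oldTop = top.get();
                if (oldTop == null) {
                    return null;                  // stack is empty
                }
                if (top.compareAndSet(oldTop, oldTop.next)) {
                    return oldTop.value;
                }
            }
        }
    }

Because a failed compareAndSet simply retries, no thread ever waits on a lock held by another; the structure is lock-free in the sense that whenever threads take steps, at least one operation completes.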
