Understanding Multi-core Processing

Understanding Multi-core Processing is a 4-day course that leads the student through many of the topics associated with software development in a parallel processing environment.  The course starts with a brief overview of common hardware architectures, data and thread parallelism, caching, and processor affinity, and how these concepts appear in operating systems such as Linux.  After the overview, the course reviews instruction-level parallelism (the MMX, SSE, NEON, and similar instruction sets) and how to use these instruction sets to increase performance.  The course then examines thread parallelism as manifested in a 1:1 scheduler, where each thread is independently schedulable on any processor core, along with the advantages and drawbacks of such an architecture.  Both logical and temporal correctness are discussed, including techniques for avoiding race conditions and deadlocks in multi-core systems.  Linux is the focus of this class, although the concepts apply equally to other operating systems such as Cisco’s IOS, Windows, and OS X.
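As a taste of the kind of hands-on material the lectures build toward, the sketch below (an illustration for this description, not an excerpt from the course labs) pins one POSIX thread to each online core with pthread_setaffinity_np and serializes updates to a shared counter with a mutex to avoid a race condition.  The worker function, iteration count, and thread cap are assumptions chosen only for the example.

    /*
     * Illustrative sketch (assumes a glibc/Linux system): one thread per
     * online core, each pinned to its own core, all updating shared data
     * under a mutex so the final count is logically correct.
     */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    #define ITERATIONS 100000   /* arbitrary workload per thread */

    static long shared_counter = 0;
    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        long core = (long)arg;

        /* Restrict this thread to a single core (processor affinity). */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET((int)core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        /* Serialize access to the shared counter to avoid a race. */
        for (int i = 0; i < ITERATIONS; i++) {
            pthread_mutex_lock(&counter_lock);
            shared_counter++;
            pthread_mutex_unlock(&counter_lock);
        }
        return NULL;
    }

    int main(void)
    {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);
        pthread_t threads[64];

        if (ncores < 1)
            ncores = 1;
        if (ncores > 64)
            ncores = 64;

        /* One independently schedulable thread per online core (1:1 model). */
        for (long c = 0; c < ncores; c++)
            pthread_create(&threads[c], NULL, worker, (void *)c);
        for (long c = 0; c < ncores; c++)
            pthread_join(threads[c], NULL);

        printf("expected %ld, got %ld\n", ncores * ITERATIONS, shared_counter);
        return 0;
    }

On a typical Linux system this would be built with something like gcc -pthread; removing the mutex demonstrates the kind of race condition the course teaches students to recognize and avoid.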

Audience

Whether you are new to multi-core or a seasoned professional, the dizzying array of options in multi-core processors, and what these cores mean for software development, can be daunting.  This course focuses on helping the attendee understand common CPU architectures and write software that takes best advantage of increasingly complex multi-core platforms.

Course Materials

The course materials include a workbook that contains all of the slides presented during the lectures as well as hands-on lab exercises.  The on-site course is taught using Linux-based, multi-core laptops to demonstrate the concepts presented during the class.

Get more information on this course