Syllabus

JNTUK B.Tech High Performance Computing (Elective – I) for R13 Batch.

This page gives you detailed information about the JNTUK B.Tech High Performance Computing (Elective – I) R13 syllabus. It will be helpful in understanding the complete curriculum for the year.

Course Objectives

  • This course covers the design of advanced modern computing systems. In particular, the design of modern microprocessors, characteristics of the memory hierarchy, and issues involved in multi-threading and multi-processing are discussed. The main objective of this course is to provide students with an understanding and appreciation of the fundamental issues and trade-offs involved in the design and evaluation of modern computers.

Course Outcomes

  • Understand the concepts and terminology of high performance computing.
  • Can write and analyze the behavior of high performance parallel programs for distributed memory architectures (using MPI).
  • Can write and analyze the behavior of high performance parallel programs for shared memory architectures (using Pthreads and OpenMP).
  • Can write simple programs for the GPU.
  • Can independently study, learn about, and present some aspect of high performance computing.

Syllabus

UNIT I: Introduction to parallel hardware and software, the need for high-performance systems and parallel programming, SISD, SIMD, MISD, and MIMD models, performance issues.
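
A quick, hedged illustration of the performance measures this unit introduces: the sketch below computes speedup (serial time divided by parallel time) and efficiency (speedup divided by core count) from run times. The timings and core count are made-up illustrative values, not figures from the syllabus or the textbook.

    /* Speedup and efficiency from measured run times (illustrative values only). */
    #include <stdio.h>

    int main(void) {
        double t_serial   = 12.0;  /* seconds for the serial run (assumed)       */
        double t_parallel = 3.5;   /* seconds for the parallel run (assumed)     */
        int    p          = 4;     /* number of cores/processes used (assumed)   */

        double speedup    = t_serial / t_parallel;  /* S = T_serial / T_parallel */
        double efficiency = speedup / p;            /* E = S / p                 */

        printf("speedup = %.2f, efficiency = %.2f\n", speedup, efficiency);
        return 0;
    }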

UNIT II: Processors, Pthreads, thread creation, passing arguments to a thread function, simple matrix multiplication using Pthreads, critical sections, mutexes, semaphores, barriers and condition variables, locks, thread safety, simple programming assignments.
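
A minimal sketch of the matrix multiplication with Pthreads named in this unit, showing thread creation and passing an argument (the thread's rank) to the thread function. The matrix size, thread count, row-wise work split, and test data are illustrative assumptions, not values prescribed by the syllabus; build with something like gcc -pthread.

    /* Row-wise matrix multiplication with Pthreads (illustrative sizes). */
    #include <pthread.h>
    #include <stdio.h>

    #define N 4             /* matrix dimension (assumed)        */
    #define THREAD_COUNT 2  /* must divide N evenly in this sketch */

    static double A[N][N], B[N][N], C[N][N];

    /* Each thread receives its rank through the void* argument and
       multiplies its share of the rows of A by B. */
    static void *mat_mult(void *arg) {
        long rank = (long) arg;
        int rows_per_thread = N / THREAD_COUNT;
        int first = rank * rows_per_thread;
        int last  = first + rows_per_thread;
        for (int i = first; i < last; i++)
            for (int j = 0; j < N; j++) {
                C[i][j] = 0.0;
                for (int k = 0; k < N; k++)
                    C[i][j] += A[i][k] * B[k][j];
            }
        return NULL;
    }

    int main(void) {
        pthread_t threads[THREAD_COUNT];

        /* Fill A with sample data and make B the identity, so C should equal A. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = i + j;
                B[i][j] = (i == j);
            }

        for (long t = 0; t < THREAD_COUNT; t++)
            pthread_create(&threads[t], NULL, mat_mult, (void *) t);
        for (long t = 0; t < THREAD_COUNT; t++)
            pthread_join(threads[t], NULL);

        printf("C[1][2] = %f\n", C[1][2]);   /* expect 3.0 */
        return 0;
    }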

UNIT III: OpenMP programming: introduction, reduction clause, parallel for-loop scheduling, atomic directive, critical sections and locks, private directive, programming assignments, n-body solvers using OpenMP.
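
A minimal sketch of a parallel for loop with the reduction clause and a schedule clause, two of the OpenMP constructs listed in this unit. The alternating series used here to estimate pi is an illustrative choice (it is not the n-body solver named above); build with something like gcc -fopenmp.

    /* OpenMP parallel for with a reduction: estimate pi from an alternating series. */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const long n = 100000000L;   /* number of terms (assumed) */
        double sum = 0.0;

        /* Each thread accumulates a private partial sum; the reduction
           clause combines the partial sums into `sum` after the loop. */
        #pragma omp parallel for reduction(+:sum) schedule(static)
        for (long i = 0; i < n; i++) {
            double factor = (i % 2 == 0) ? 1.0 : -1.0;
            sum += factor / (2 * i + 1);
        }

        printf("pi ~= %.10f (max threads = %d)\n", 4.0 * sum, omp_get_max_threads());
        return 0;
    }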

UNIT IV: Introduction to MPI programming: MPI primitives such as MPI_Send, MPI_Recv, MPI_Init, MPI_Finalize, etc., application of MPI to the trapezoidal rule, collective communication primitives in MPI, MPI derived datatypes, performance evaluation of MPI programs, parallel sorting algorithms, tree search solved using MPI, programming assignments.
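
A minimal sketch of the trapezoidal rule with MPI named in this unit: each process integrates its own sub-interval and MPI_Reduce (one of the collective communication primitives) combines the partial results on rank 0. The integrand f(x) = x*x, the interval [0, 1], and the number of trapezoids are illustrative assumptions; build and run with something like mpicc and mpirun.

    /* Parallel trapezoidal rule with MPI (illustrative integrand and interval). */
    #include <stdio.h>
    #include <mpi.h>

    static double f(double x) { return x * x; }

    /* Serial trapezoidal rule on [left, right] with n sub-intervals of width h. */
    static double trap(double left, double right, long n, double h) {
        double sum = (f(left) + f(right)) / 2.0;
        for (long i = 1; i < n; i++)
            sum += f(left + i * h);
        return sum * h;
    }

    int main(int argc, char *argv[]) {
        int rank, size;
        double a = 0.0, b = 1.0;     /* integration interval (assumed)         */
        long n = 1000000;            /* total trapezoids; assumed divisible by  */
                                     /* the number of processes for brevity     */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double h = (b - a) / n;              /* width of one trapezoid   */
        long local_n = n / size;             /* trapezoids per process   */
        double local_a = a + rank * local_n * h;
        double local_b = local_a + local_n * h;
        double local_int = trap(local_a, local_b, local_n, h);

        double total = 0.0;
        MPI_Reduce(&local_int, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Integral of x^2 on [%g, %g] ~= %.10f\n", a, b, total);

        MPI_Finalize();
        return 0;
    }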

UNIT V: Introduction to GPU computing, graphics pipelines, GPGPU, data parallelism and CUDA C programming, CUDA thread organization, simple matrix multiplication using CUDA, CUDA memories.
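
A minimal CUDA C sketch of the simple matrix multiplication named in this unit: one thread computes one element of C = A * B, which illustrates CUDA thread organization (grids and blocks) and host/device memory transfers. The matrix size, block dimensions, and test data are illustrative assumptions; build with nvcc.

    /* Naive CUDA C matrix multiplication: one thread per output element. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    #define N 256   /* matrix dimension (assumed) */

    __global__ void matMul(const float *A, const float *B, float *C, int n) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n && col < n) {
            float sum = 0.0f;
            for (int k = 0; k < n; k++)
                sum += A[row * n + k] * B[k * n + col];
            C[row * n + col] = sum;
        }
    }

    int main(void) {
        size_t bytes = N * N * sizeof(float);
        float *hA = (float *) malloc(bytes);
        float *hB = (float *) malloc(bytes);
        float *hC = (float *) malloc(bytes);
        for (int i = 0; i < N * N; i++) { hA[i] = 1.0f; hB[i] = 2.0f; }

        float *dA, *dB, *dC;
        cudaMalloc((void **) &dA, bytes);
        cudaMalloc((void **) &dB, bytes);
        cudaMalloc((void **) &dC, bytes);
        cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

        /* 16x16 thread blocks; enough blocks to cover the whole matrix. */
        dim3 block(16, 16);
        dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
        matMul<<<grid, block>>>(dA, dB, dC, N);
        cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);

        printf("C[0] = %f (expected %f)\n", hC[0], 2.0f * N);

        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        free(hA); free(hB); free(hC);
        return 0;
    }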

UNIT VI: Benchmarking and tools for high-performance computing environments, evaluation of numerical linear algebra routines (BLAS) for parallel systems.
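
A minimal benchmarking sketch in the spirit of this unit: time one call to the BLAS routine dgemm through the CBLAS interface and report GFLOP/s. The header <cblas.h>, the matrix size, and the link flag (for example -lopenblas or -lblas, depending on the installed BLAS implementation) are assumptions here, not requirements from the syllabus.

    /* Time a CBLAS dgemm call and report an approximate GFLOP/s figure. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <cblas.h>

    int main(void) {
        const int n = 1024;                  /* matrix dimension (assumed) */
        double *A = malloc(n * n * sizeof *A);
        double *B = malloc(n * n * sizeof *B);
        double *C = malloc(n * n * sizeof *C);
        for (int i = 0; i < n * n; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* C = 1.0 * A * B + 0.0 * C, row-major storage, no transposes. */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double gflops = 2.0 * n * n * n / secs / 1e9;  /* ~2n^3 flops for dgemm */
        printf("dgemm %dx%d: %.3f s, %.2f GFLOP/s\n", n, n, secs, gflops);

        free(A); free(B); free(C);
        return 0;
    }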

Text Books

  • An Introduction to Parallel Programming, Peter S. Pacheco, Elsevier, 2011
  • Programming Massively Parallel Processors, Kirk & Hwu, Elsevier, 2012

Reference Books

  • CUDA by Example: An Introduction to General-Purpose GPU Programming, Jason Sanders and Edward Kandrot, Pearson, 2011
  • CUDA Programming, Shane Cook, Elsevier
  • High Performance Heterogeneous Computing, Jack Dongarra and Alexey Lastovetsky, Wiley
  • Parallel Computing: Theory and Practice, Michael J. Quinn, TMH
