Lectures
Course syllabus:
- Introduction to parallel computing
- In-class exercise #1 - Look around and choose your favorite HPC application. Post your findings at https://piazza.com/ut.ee/spring2019/mtat08020
- Petascale computing examples, Instruction Level Parallelism (ILP)
- In-class exercise #2 - Find a few Exascale Computing Applications; post your findings at https://piazza.com/ut.ee/spring2019/mtat08020
- ILP concluded; Memory and Cache effects;
- In-class exercise #3 - Choose and read one of the two articles
- Criticality Aware Tiered Cache Hierarchy: A Fundamental Relook at Multi-Level Cache Hierarchies
- Exploring the Performance Benefit of Hybrid Memory System on HPC Environments
- MPI (Message Passing Interface) & mpi4py
- Lecture slides - MPI_and_mpi4py.pdf
- In-class exercise #4:
- Read the article by Jonathan Dursi, "HPC is dying, and MPI is killing it"
- Choose one of the alternative technologies mentioned in the article for further investigation (Google search, etc.)
- Post a brief review of the main points of the chosen technology/platform to the course Piazza https://piazza.com/ut.ee/spring2019/mtat08020 (can be done as group work)
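As a warm-up for the mpi4py lecture: real mpi4py programs need an MPI runtime and are launched with e.g. `mpiexec -n 2 python script.py`, which is not always available. The sketch below is a stand-in that reproduces the same point-to-point send/recv pattern with Python's `multiprocessing` module; the mpi4py equivalents are noted in comments (the `exchange` helper and its payload are illustrative, not from the course material).

```python
# Stand-in for mpi4py point-to-point messaging, using multiprocessing instead
# of a real MPI runtime. The pattern mirrors comm.send / comm.recv in mpi4py.
from multiprocessing import Pipe, Process

def worker(conn):
    # plays "rank 1": blocks on an incoming message, then replies
    # (analogous to msg = comm.recv(source=0); comm.send(reply, dest=0))
    msg = conn.recv()
    conn.send({"reply": msg["data"] * 2})
    conn.close()

def exchange(payload):
    # plays "rank 0": sends a message and blocks until the reply arrives
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send({"data": payload})   # like comm.send(..., dest=1)
    reply = parent_conn.recv()            # like comm.recv(source=1)
    p.join()
    return reply
```

For example, `exchange(21)` returns `{"reply": 42}`. In mpi4py the two roles would be the two ranks of one SPMD program, selected by `comm.Get_rank()`.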
- Parallel Computer Architectures; Flynn's taxonomy, Flynn-Johnson classification
- In-class exercise #5:
- Browse the Top500 list, choose your favorite computer system there, and write a short review of the system's highlights. In particular, also address the architectural aspects we have discussed so far in the course!
- Post the short review to the course Piazza https://piazza.com/ut.ee/spring2019/mtat08020
- NOTE: there are quite a few interesting articles at the TOP500 site as well -- if you prefer, you can read and review one of those articles instead.
- Designing Parallel programs; performance metrics and analysis
- In-class exercise #6:
- Search for the best parallel programming languages
- Choose one of them (your favorite!) and post a brief review to Course Piazza!
- One possible starting point can be: https://www.slant.co/topics/6024/~programming-languages-for-concurrent-programming
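The performance metrics from the "Designing Parallel programs" lecture boil down to two standard definitions: speedup S(p) = T_1 / T_p and efficiency E(p) = S(p) / p. A minimal sketch (the timing numbers in the example are made up for illustration):

```python
def speedup(t_serial, t_parallel):
    """Speedup S(p) = T_1 / T_p: how many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Efficiency E(p) = S(p) / p: how well the p processors are utilised."""
    return speedup(t_serial, t_parallel) / n_procs

# hypothetical timings: 10 s serially, 2.5 s on 8 processors
# speedup(10.0, 2.5)        -> 4.0
# efficiency(10.0, 2.5, 8)  -> 0.5  (only half of the ideal 8x speedup)
```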
- Amdahl's law, Gustafson-Barsis law; Methods for increasing efficiency; Benchmarking
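The two laws above can be written as one-liners. With f the parallelisable fraction of the work and n processors, Amdahl's law bounds speedup for a fixed problem size, while Gustafson-Barsis gives the scaled speedup when the problem grows with n:

```python
def amdahl_speedup(parallel_fraction, n):
    # Amdahl's law: S(n) = 1 / ((1 - f) + f / n), fixed problem size
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

def gustafson_speedup(parallel_fraction, n):
    # Gustafson-Barsis: S(n) = (1 - f) + f * n, problem scales with n
    return (1.0 - parallel_fraction) + parallel_fraction * n

# e.g. f = 0.9 on 10 processors:
# amdahl_speedup(0.9, 10)    -> ~5.26 (the 10% serial part dominates)
# gustafson_speedup(0.9, 10) -> 9.1
```

The contrast for f = 0.9 on 10 processors (about 5.3x vs 9.1x) is exactly the point of the Gustafson-Barsis reinterpretation: scaling the workload with the machine recovers near-linear speedup.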
- In-class exercise #7: Post a short review to the course Piazza on either A) or B):
- A) Find out at OpenBenchmarking.org:
- What are currently the most popular benchmarks related to parallel computing?
- Find a benchmark that you like the most and describe it!
- Any other interesting aspect you find fascinating?
- B) Find out about Top500 vs Graph500:
- What is their difference?
- What inspired the creation of graph500?
- How different are these lists?
- Some other interesting aspect you notice when comparing the two benchmarks?
- Parallel algorithms for systems of linear equations - an overview; the finite element method for numerical solution of partial differential equations
- In-class exercise 8:
- Continuing with the task from In-class exercise 6, which was:
- Search for the best parallel programming languages
- Choose one of them (your favorite!) and compose a brief review!
- The task is to get some corresponding real examples working using an appropriate Jupyter extension kernel! Please also share the steps (as instructions) for getting the extension kernel running!
- As always, post your results to the course Piazza!
- Application examples: Domain Decomposition Methods (DDM) to solve large systems with sparse matrices
- In-class question: Lecture slides --> slide 111 -- Which of the above operations can be implemented with hiding communication behind computations?
- (in the sense that some useful work can be done while communication is happening in the background)
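The idea behind the question can be sketched without an MPI installation: post the communication, compute on the interior points while the halo exchange is in flight, then wait for it before using boundary data. Below, a background thread stands in for the network (the `fetch_halo` name and the 0.3 s latency are made-up illustration, not from the slides); in real MPI code the three steps would be MPI_Irecv/MPI_Isend, the interior computation, and MPI_Wait.

```python
import threading
import time

def fetch_halo(buf, latency=0.3):
    # stand-in for a non-blocking receive: the "network" delivers after `latency` s
    time.sleep(latency)
    buf.append("halo values")

def overlapped_step():
    halo = []
    start = time.perf_counter()
    # 1. post the communication (analogous to MPI_Irecv / MPI_Isend)
    req = threading.Thread(target=fetch_halo, args=(halo,))
    req.start()
    # 2. meanwhile, do useful work on the interior points
    interior = sum(i * i for i in range(200_000))
    # 3. wait for the halo before touching boundary points (analogous to MPI_Wait)
    req.join()
    elapsed = time.perf_counter() - start
    return interior, halo, elapsed
```

The payoff is that the elapsed time is roughly max(communication, computation) instead of their sum, which is exactly what "hiding communication behind computation" means.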
- DDM continued: Iterative methods
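The simplest iterative method touched on in the DDM context is Jacobi iteration: every component of the new iterate depends only on the previous iterate, so all components can be updated in parallel. A plain-Python sketch on a tiny diagonally dominant system (the 2x2 example is made up for illustration):

```python
def jacobi(A, b, iterations=50):
    # Jacobi iteration: x_i^(k+1) = (b_i - sum_{j != i} A[i][j] * x_j^(k)) / A[i][i]
    # Converges for strictly diagonally dominant A; each component update is
    # independent of the others, which makes the method easy to parallelise.
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# example: 4x + y = 1, x + 3y = 2  has the solution x = 1/11, y = 7/11
# jacobi([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In a domain-decomposition setting the inner `sum` over off-diagonal entries is where neighbouring subdomains would exchange halo values each sweep.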
- Guest lecture on parallel implementation of cellular automata
- General-Purpose GPU Programming (GPGPU) slides
- Parallel programming models
- Additional frameworks/applications
- Apache Spark (student presentation by Tek Raj Chhetri): Parallel Computing with Apache Spark