Institute of Computer Science
Parallel Computing (MTAT.08.020), 2018/19 spring


Lectures

Course syllabus:

Lecture slides

  1. Introduction to parallel computing
    • In-class exercise #1 - Look around and choose your favorite HPC application. Post your findings at https://piazza.com/ut.ee/spring2019/mtat08020
  2. Petascale computing examples, Instruction Level Parallelism (ILP)
    • In-class exercise #2 - Find a few Exascale Computing Applications; post your findings at https://piazza.com/ut.ee/spring2019/mtat08020
  3. ILP concluded; Memory and Cache effects;
    • In-class exercise #3 - Choose and read one of the two articles
      • Criticality Aware Tiered Cache Hierarchy: A Fundamental Relook at Multi-Level Cache Hierarchies
      • Exploring the Performance Benefit of Hybrid Memory System on HPC Environments
      and post a brief summary of the article's main points to the course Piazza https://piazza.com/ut.ee/spring2019/mtat08020 (can be done as group work)
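    • The memory and cache effects from this lecture can be glimpsed even from Python, although interpreter overhead mutes them compared to C. The sketch below (an illustration of mine, not part of the course materials) sums the same matrix in row-major and column-major order; the column-major version makes stride-N accesses with poor spatial locality, which in a compiled language is typically several times slower.

```python
# Illustrative sketch: row-major vs. column-major traversal of a matrix.
# In a compiled language the row-major loop is usually much faster,
# because consecutive accesses fall on the same cache lines.

N = 400
matrix = [[1.0] * N for _ in range(N)]

def sum_row_major(m):
    """Traverse in storage order: good spatial locality."""
    total = 0.0
    for row in m:
        for x in row:
            total += x
    return total

def sum_col_major(m):
    """Stride-N traversal: each access jumps to a different row."""
    total = 0.0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total
```

Both functions compute the same sum; only the access pattern differs, which is exactly the point of the cache discussion.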
  4. MPI (Message Passing Interface) & mpi4py
    • Lecture slides - MPI_and_mpi4py.pdf
    • In-class exercise #4:
      1. Read the article by Jonathan Dursi, "HPC is dying, and MPI is killing it"
      2. Choose one of the alternative technologies mentioned in the article for further investigation (google search etc.)
      3. Post a brief review of the main points of the chosen technology/platform to the course Piazza https://piazza.com/ut.ee/spring2019/mtat08020 (can be done as group work)
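    • Since an mpi4py program needs an MPI library and must be launched with mpiexec, here is a self-contained sketch of mine that mimics MPI-style point-to-point send/recv with multiprocessing.Pipe instead; the comments show the corresponding mpi4py calls.

```python
from multiprocessing import Process, Pipe

def worker(conn):
    """'Rank 1': receive a list, double each element, send it back.
    The mpi4py analogue: data = comm.recv(source=0), then
    comm.send(result, dest=0)."""
    data = conn.recv()
    conn.send([x * 2 for x in data])
    conn.close()

def ping_pong(data):
    """'Rank 0': send data to the other process and wait for the reply
    (like comm.send(data, dest=1) followed by comm.recv(source=1))."""
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send(data)
    result = parent_end.recv()
    p.join()
    return result

if __name__ == "__main__":
    print(ping_pong([1, 2, 3]))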
  5. Parallel Computer Architectures; Flynn's taxonomy, Flynn-Johnson classification
    • In-class exercise #5:
      1. Browse the Top500 list, choose your favorite computer system there, and write a short review of the system's highlights! In particular, address the architectural aspects we have discussed so far in the course!
      2. Post the short review to the course Piazza https://piazza.com/ut.ee/spring2019/mtat08020
      3. NOTE: there are quite a few interesting articles at the TOP500 site as well. If you prefer, you can read and review one of those articles instead.
  6. Designing Parallel programs; performance metrics and analysis
    • In-class exercise #6:
      1. Search for the best parallel programming languages
      2. Choose one of them (your favorite!) and post a brief review to Course Piazza!
      3. One possible starting point can be: https://www.slant.co/topics/6024/~programming-languages-for-concurrent-programming
  7. Amdahl's law, Gustafson-Barsis law; Methods for increasing efficiency; Benchmarking
    • In-class exercise #7: Post the short review to the course Piazza on either A) or B):
      • A) Explore OpenBenchmarking.org:
        - What are currently the most popular benchmarks related to parallel computing?
        - Find a benchmark that you like the most and describe it!
        - Any other interesting aspect you find fascinating?
      • B) Compare Top500 and Graph500:
        - What is their difference?
        - What inspired the creation of graph500?
        - How different are these lists?
        - Some other interesting aspect you notice when comparing the two benchmarks?
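    • The two laws from this lecture are easy to check numerically. The sketch below implements both formulas (parameter names are mine): with a 90% parallel fraction, Amdahl's speedup can never exceed 10 however many processors are added, while the Gustafson-Barsis scaled speedup keeps growing with n.

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n processors when a fraction p of a
    fixed-size workload is parallelizable: S(n) = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson-Barsis law: scaled speedup when a fraction p of the
    scaled workload is parallel: S(n) = (1 - p) + p * n."""
    return (1.0 - p) + p * n
```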
  8. Parallel algorithms for systems of linear equations - an overview; finite element method for numerical solution of partial differential equations
    • In-class exercise 8:
      • Continuing with the task from In-class exercise 6, which was:
        - Search for best parallel programming languages
        - Choose one of them (your favorite!) and compose a brief review!
      • The task is to get some corresponding real examples working using an appropriate Jupyter extension kernel! Please also share the steps (as instructions) for getting the extension kernel running!
      • As always, post your results to the course Piazza!
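    • The general recipe for the kernel task is: install the kernel package, register its kernelspec with Jupyter, then verify that Jupyter sees it. As one hypothetical walk-through (using the bash kernel; its package and install commands are taken from that project's documentation, and other language kernels follow the same pattern):

```shell
# Install the kernel package into the current Python environment
pip install bash_kernel

# Register the kernelspec with Jupyter
python -m bash_kernel.install

# Verify that the new kernel is listed
jupyter kernelspec list
```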
  9. Application examples: Domain Decomposition Methods (DDM) to solve large systems with sparse matrices
    • In-class question: Lecture slides --> slide 111 -- Which of the above operations can be implemented with communication hidden behind computation?
      • (in the sense that it is possible to do some useful work while communication is happening in the background)
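    • One way to picture "hiding communication behind computation" is the nonblocking MPI pattern: post MPI_Isend/MPI_Irecv for the halo (boundary) values, compute on the interior points that do not need them, then MPI_Wait before touching the halo. The Python sketch below (names and numbers are mine) simulates the exchange with a background thread so the overlap is visible without an MPI installation.

```python
import threading
import time

def exchange_halo(local, out):
    """Simulated nonblocking halo exchange; in MPI this would be
    MPI_Isend/MPI_Irecv posting boundary values to the neighbours."""
    time.sleep(0.05)                      # stands in for network latency
    out["halo"] = [local[0], local[-1]]   # boundary values "arrive"

local = list(range(8))
incoming = {}

t = threading.Thread(target=exchange_halo, args=(local, incoming))
t.start()                                 # post the communication ...

# ... and overlap it with useful work on the interior points,
# which do not depend on the halo values.
interior = [x * x for x in local[1:-1]]

t.join()                                  # MPI_Wait: halo is needed from here on
```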
  10. DDM continued: Iterative methods
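    • As a concrete instance of the iterative methods in this lecture, here is a minimal Jacobi iteration for a small diagonally dominant system (a textbook sketch of mine, not taken from the slides). The Jacobi update is naturally parallel: every component of the new iterate depends only on the previous iterate, so one sweep parallelizes trivially.

```python
def jacobi(A, b, iters=50):
    """Jacobi iteration: x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii.
    Converges for strictly diagonally dominant A. All n component
    updates within a sweep are independent of each other."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```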
  11. Guest lecture on parallel implementation of cellular automata
  12. General-Purpose GPU Programming (GPGPU slides)
  13. Parallel programming models
  14. Additional frameworks/applications
    • Apache Spark (student presentation) Tek Raj Chhetri: Parallel Computing with Apache Spark
      • Databricks Notebook file
  • Institute of Computer Science
  • Faculty of Science and Technology
  • University of Tartu
In case of technical problems or questions write to:

Contact the course organizers with organizational and course-content questions.
The proprietary copyrights of educational materials belong to the University of Tartu. The use of educational materials is permitted for the purposes and under the conditions provided for in the copyright law for the free use of a work. When using educational materials, the user is obligated to give credit to the author of the educational materials.
The use of educational materials for other purposes is allowed only with the prior written consent of the University of Tartu.
Terms of use for the Courses environment