Introduction
History
1940-1960s: the first supercomputers
Military-driven evolution
The first modern computers were developed during WWII (ENIAC, the Electronic Numerical Integrator and Computer)

ENIAC in Philadelphia, Pennsylvania. Glen Beck (background) and Betty Snyder (foreground) program the ENIAC in building 328 at the Ballistic Research Laboratory (BRL). Figure taken from Wikipedia.
Cold War: code breaking, intelligence gathering and processing, weapon design
ILLIAC IV
Began in 1966; the goal was 1 GFLOP/s at an estimated cost of $8 million
Finished in 1972 at a cost of $31 million, with a top speed far below 1 GFLOP/s
1975-1990: the Cray era
1990-2010: the cluster era
Adding more vector processors (>8) leads to memory contention, which slows down the computation.
Solution: distributed memory systems in which each processor has its own memory (see the sketch after the figure below).
First cluster built in 1994 by Becker and Sterling at NASA using readily available PCs and networking hardware (16 Intel 486DX PCs, 10 Mb/s Ethernet, 1 GFLOP/s) - named Beowulf

An example of a Beowulf cluster. Figure taken from https://memim.com/beowulf-cluster.html.
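
To make the distributed-memory idea above concrete, here is a minimal sketch in Python using mpi4py (the language and library are assumptions for illustration; they are not mentioned above and are unrelated to the software of the original Beowulf). Each process (rank) holds only its own slice of the data in its own memory and combines results through explicit communication rather than a shared address space.

    # Distributed-memory sketch (assumes mpi4py and an MPI implementation
    # such as Open MPI are installed); run with, e.g.:
    #   mpirun -n 4 python this_script.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # id of this process
    size = comm.Get_size()   # total number of processes

    N = 1_000_000            # global problem size
    # Each rank works on its own chunk, stored in its own local memory.
    start = rank * N // size
    stop = (rank + 1) * N // size
    local_sum = sum(i * i for i in range(start, stop))

    # Explicit message passing combines the per-rank partial results.
    total = comm.allreduce(local_sum, op=MPI.SUM)
    if rank == 0:
        print(f"sum of squares below {N}: {total}")

Because no memory is shared, adding more processes adds more local memory and bandwidth instead of more contention, which is exactly the motivation for the cluster approach described above.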
2010-present: the accelerator (GPU) and hybrid era
Increasing the number of processor cores rather than processor speeds.
Development of GPUs and other accelerators.
Today’s supercomputers are hybrid clusters that combine traditional processors with accelerators.
What is HPC?
High Performance Computing (HPC) refers to the practice of using very large computers (supercomputers, clusters) to perform computationally and memory-intensive tasks at high speed through parallel processing. It relies on advanced hardware and software to solve extremely complex problems that go beyond the capabilities of conventional computing systems.
What is High-Performance Computing - YouTube (Croatian).

Overview of HPC
The main features of HPC
processing at high speeds
parallel and distributed programming and architectures (illustrated by the sketch below)
solving big and complex problems
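
To give a concrete feel for the parallel-processing feature listed above, the following sketch uses only Python's standard multiprocessing module (the language, function names, and toy workload are assumptions for illustration; real HPC codes more commonly use MPI, OpenMP, or GPU kernels). It runs the same batch of compute-heavy tasks first serially and then on four worker processes, and prints both timings.

    import time
    from multiprocessing import Pool

    def heavy_task(n):
        # stand-in for a computationally intensive kernel
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        work = [2_000_000] * 8   # eight independent tasks

        t0 = time.perf_counter()
        serial = [heavy_task(n) for n in work]
        t1 = time.perf_counter()

        with Pool(processes=4) as pool:      # four worker processes
            parallel = pool.map(heavy_task, work)
        t2 = time.perf_counter()

        assert serial == parallel            # same results, less wall time
        print(f"serial:   {t1 - t0:.2f} s")
        print(f"parallel: {t2 - t1:.2f} s on 4 workers")

On a machine with at least four cores the parallel timing should be noticeably smaller; the exact ratio depends on the hardware.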
Why HPC?
Personal computers often lack the processing power and memory capacity needed.
High-performance GPUs may not be readily available.
Large volumes of computational tasks need to be processed.
Access to a diverse range of specialized software programs is required.
Seamless access to extensive data repositories and databases is essential.
Many research and engineering problems today are beyond the capabilities of personal computers and even workstations. The US National Science Foundation (NSF) has recognized this and defined the ‘Grand Challenges’, the fundamental problems of science and engineering that require high performance computers to solve:
Advanced New Materials
Prediction of Climate Change
Semiconductor Design and Manufacturing
Drug Design and Development
Energy through Fusion
Water Sustainability
Understanding Biological Systems
New Combustion Systems
Astronomy and Cosmology
Hazard Analysis and Management
Human Science and Policy
Virtual Product Design
Cancer Detection and Therapy
AI and Deep Learning
The main reason for using HPC
To be able to solve bigger and more complex problems!
You can tackle bigger problems in the same amount of time, and/or solve problems of the same size in less time.
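
Two textbook formulas make these two claims quantitative: Amdahl's law for solving the same-sized problem faster (strong scaling) and Gustafson's law for solving a bigger problem in the same time (weak scaling). Neither law is named above, so the sketch below is only an illustrative calculation with an assumed parallel fraction of 95%.

    # Back-of-the-envelope scaling estimates for a code that is 95% parallel.
    def amdahl_speedup(p, n):
        """Strong scaling: same problem size on n processors."""
        return 1.0 / ((1.0 - p) + p / n)

    def gustafson_speedup(p, n):
        """Weak scaling: problem size grows with n processors."""
        return (1.0 - p) + p * n

    for n in (16, 256, 4096):
        print(f"n={n:5d}  strong: {amdahl_speedup(0.95, n):7.1f}x"
              f"   weak: {gustafson_speedup(0.95, n):9.1f}x")

Under these assumptions the strong-scaling speedup saturates near 1/(1-p) = 20x no matter how many processors are added, while the weak-scaling speedup keeps growing, which is why HPC is used above all to make bigger problems tractable.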
Where to use HPC?
Think of a task that you cannot currently solve on your local computer, but which could be solved on a supercomputer or computer cluster.