GITNUXREPORT 2026

Calculating Power Statistics

Computing power has exponentially increased from primitive kiloFLOPS to modern exaflop supercomputers.

How We Build This Report

01
Primary Source Collection

Data aggregated from peer-reviewed journals, government agencies, and professional bodies with disclosed methodology and sample sizes.

02
Editorial Curation

Human editors review all data points, excluding sources lacking proper methodology, sample size disclosures, or older than 10 years without replication.

03
AI-Powered Verification

Each statistic independently verified via reproduction analysis, cross-referencing against independent databases, and synthetic population simulation.

04
Human Cross-Check

Final human editorial review of all AI-verified statistics. Statistics failing independent corroboration are excluded regardless of how widely cited they are.

Statistics that could not be independently verified are excluded regardless of how widely cited they are elsewhere.

Our process →

Key Statistics

Statistic 1

AMD Ryzen Threadripper PRO 5995WX scores 100 GFLOPS peak single CPU FP64

Statistic 2

Intel Core i9-13900K achieves 1.7 TFLOPS FP32 peak with AVX-512

Statistic 3

NVIDIA H100 SXM GPU delivers 67 TFLOPS FP64 Tensor Core performance

Statistic 4

AMD Instinct MI300X GPU reaches 163.4 TFLOPS FP16 peak

Statistic 5

Apple M2 Ultra chip peaks at 31.6 TFLOPS FP32 GPU performance

Statistic 6

Intel Xeon Platinum 8592+ offers 2.9 TFLOPS FP64 per socket peak

Statistic 7

NVIDIA A100 80GB GPU achieves 19.5 TFLOPS FP64 with Tensor Cores

Statistic 8

AMD EPYC 9754 (Genoa) peaks at 3.2 TFLOPS FP64 dual socket

Statistic 9

Qualcomm Snapdragon 8 Gen 2 GPU at 3.2 TFLOPS FP32 peak for mobile

Statistic 10

IBM Power10 processor delivers 5 TFLOPS FP64 per chip

Statistic 11

NVIDIA RTX 4090 GPU reaches 82.6 TFLOPS FP16 peak shader

Statistic 12

AMD Radeon RX 7900 XTX at 61 TFLOPS FP32 peak performance

Statistic 13

Intel Arc A770 GPU delivers 17.2 TFLOPS FP16 peak

Statistic 14

ARM Neoverse V2 core in AWS Graviton3 peaks at 0.4 TFLOPS FP32 per core

Statistic 15

Google Tensor Processing Unit v4 (Trillium) achieves 2.7 petaFLOPS FP16 per pod

Statistic 16

Cerebras Wafer-Scale Engine 2 (WSE-2) delivers 20 petaFLOPS FP16 AI performance

Statistic 17

Graphcore IPU Colossus MK2 GC200 at 350 TOPS INT8 per chip

Statistic 18

SambaNova SN40L chip reaches 2 petaFLOPS FP16 per card

Statistic 19

Tenstorrent Grayskull at 114 TOPS INT8 peak performance

Statistic 20

SiPearl Rhea CPU for HPC peaks at 1.7 TFLOPS FP64 per socket

Statistic 21

Frontier's HPE Slingshot-11 interconnect enables 200 Gb/s per node for calculating power

Statistic 22

NVIDIA DGX H100 system with 8 H100 GPUs reaches 32 petaFLOPS FP8 AI

Statistic 23

AMD MI250X dual-GPU die at 47.9 TFLOPS FP64 peak

Statistic 24

Intel Ponte Vecchio (Max 1550) GPU at 56 TFLOPS FP64 Tensor

Statistic 25

Frontier supercomputer holds the current TOP500 #1 at 1.194 exaFLOPS Rmax as of June 2023

Statistic 26

Aurora supercomputer ranks #2 at 1.012 exaFLOPS Rmax in June 2023 TOP500 list

Statistic 27

Eagle supercomputer at 561.2 petaFLOPS Rmax, #3 on June 2023 TOP500

Statistic 28

Fugaku at 442.0 petaFLOPS Rmax, #4 in June 2023

Statistic 29

LUMI at 381.0 petaFLOPS Rmax, #5 June 2023 TOP500

Statistic 30

Frontier's Rp peak is 1.707 exaFLOPS in June 2023

Statistic 31

El Capitan projected at over 2 exaFLOPS, but current Leonardo at 233.3 petaFLOPS #6 June 2023

Statistic 32

Alps supercomputer at 204.8 petaFLOPS #7 June 2023 TOP500

Statistic 33

MareNostrum 5 at 175.5 petaFLOPS #9 June 2023

Statistic 34

Frontier uses 37,888 AMD Instinct MI250X GPUs contributing to its exaFLOPS performance

Statistic 35

Aurora employs Intel Xeon Max CPUs and Data Center GPU Max for 1.012 exaFLOPS

Statistic 36

Summit at Oak Ridge has 27,648 NVIDIA V100 GPUs for 148.6 petaFLOPS Rmax current rank #13

Statistic 37

Perlmutter at NERSC delivers 64.6 petaFLOPS Rmax with AMD GPUs, rank #20 June 2023

Statistic 38

Frontier consumes 20.99 MW power for 1.194 exaFLOPS, efficiency 56.9 gigaFLOPS/W

Statistic 39

Japan's ABCI-Q at 95.2 petaFLOPS Rmax for quantum simulation, rank #26 June 2023

Statistic 40

China's OceanLite at 1.3 exaFLOPS AI performance but 125.4 petaFLOPS HPL #27

Statistic 41

Microsoft Azure Eagle at 561 petaFLOPS but HPL 561.2 petaFLOPS #3

Statistic 42

Nvidia-powered Isambard-AI at 132.0 petaFLOPS #18 June 2023 TOP500

Statistic 43

HPC6 at 110.4 petaFLOPS Rmax #24 June 2023

Statistic 44

AMD EPYC 7763 CPU in Frontier nodes contributes to overall calculating power

Statistic 45

HPE Cray EX architecture in Frontier enables 9.2 million cores

Statistic 46

Japan's Fugaku with A64FX processors at 442 petaFLOPS sustained

Statistic 47

European LUMI uses AMD MI250X GPUs for 381 petaFLOPS

Statistic 48

Selene at 63.5 petaFLOPS Rmax with NVIDIA A100 GPUs #33 June 2023

Statistic 49

Frontier supercomputer efficiency is 52.72 gigaFLOPS/W Green500 #1 June 2023

Statistic 50

Aurora at 49.03 gigaFLOPS/W #2 on Green500 June 2023

Statistic 51

Eagle achieves 46.18 gigaFLOPS/W efficiency #3 Green500 June 2023

Statistic 52

Alps at 40.60 gigaFLOPS/W #4 Green500, consumes less power per FLOP

Statistic 53

LUMI supercomputer at 38.99 gigaFLOPS/W #5 Green500 June 2023

Statistic 54

NVIDIA H100 GPU efficiency up to 1.98 TFLOPS/W FP64 Tensor

Statistic 55

AMD MI300X at 81.7 TFLOPS/W FP16 efficiency claimed

Statistic 56

Google TPU v5e at 2.5x better efficiency than v4, ~400 TFLOPS/W INT8

Statistic 57

Cerebras CS-3 wafer-scale at 1200 TFLOPS/W sparsity FP16

Statistic 58

Graphcore Bow IPU efficiency 500+ TOPS/W INT8

Statistic 59

Frontier total power draw 21 MW for 1.194 exaFLOPS

Statistic 60

Fugaku consumes 29.9 MW at 442 petaFLOPS, 14.8 gigaFLOPS/W

Statistic 61

Summit power 10.1 MW for 148.6 petaFLOPS, 14.7 gigaFLOPS/W

Statistic 62

Sunway TaihuLight used 15.37 MW for 93 petaFLOPS, 6.05 gigaFLOPS/W historical

Statistic 63

NVIDIA A100 SXM4 400W TDP for 19.5 TFLOPS FP64, ~48.75 GFLOPS/W

Statistic 64

AMD EPYC 9754 400W TDP dual socket ~8 GFLOPS/W FP64

Statistic 65

Intel Xeon 8592+ 350W TDP ~8.3 GFLOPS/W FP64 per socket

Statistic 66

Apple M1 Max 60W for 10.4 TFLOPS FP32, 173 GFLOPS/W GPU

Statistic 67

Qualcomm Snapdragon 8 Gen 2 5nm process 4nm effective, ~640 GFLOPS/W mobile GPU

Statistic 68

IBM Power10 at 20.6 gigaFLOPS/W in TOP500 systems

Statistic 69

SiPearl Rhea 2.0 nm process target 50+ GFLOPS/W FP64

Statistic 70

El Capitan projected 2 exaFLOPS at under 30 MW, ~66 gigaFLOPS/W target

Statistic 71

Moore's Law predicts doubling of transistors every 2 years, implying ~1.86x computing power

Statistic 72

Exascale computing achieved 2022, zettascale targeted by 2030 at 10^21 FLOPS

Statistic 73

Quantum supremacy demonstrated by Google Sycamore at 53 qubits, 200s vs classical 10k years

Statistic 74

IBM roadmap to 100k+ logical qubits by 2033 for fault-tolerant quantum computing

Statistic 75

Landauer limit theoretical minimum energy 2.8 kT ln2 per bit erasure ~3 zJ/op at room temp

Statistic 76

Dennard scaling ended 2006, but 3D stacking to continue power efficiency gains

Statistic 77

Optical computing could reach 10^15 FLOPS/W vs electronic 10^12

Statistic 78

Neuromorphic computing like Intel Loihi 2 at 10^12 ops/W synaptic

Statistic 79

Frontier to El Capitan 2x performance at same power by 2025

Statistic 80

AMD roadmap MI400 series 5x AI performance over MI300 by 2026

Statistic 81

NVIDIA Rubin platform R100 GPU 30x inference perf over Hopper by 2026

Statistic 82

Intel 18A process 1.8nm for Xeon 6 by 2025, 20% perf/W gain

Statistic 83

TSMC A16 1.6nm node 10% speed 15-20% power reduction 2026

Statistic 84

Quantum annealers like D-Wave Advantage 5000+ qubits solve optimization 10^6x faster

Statistic 85

Photonic chips Lightmatter Passage 36 petaFLOPS FP16 at 10 kW

Statistic 86

Global supercomputing capacity to hit 10 exaFLOPS aggregate by 2025

Statistic 87

AI training FLOPS doubling every 6 months, 10x every 2 years per OpenAI

Statistic 88

Bekenstein bound limits info density 10^69 bits/m^3 black hole, theoretical compute limit

Statistic 89

Reversible computing could approach Landauer limit, 10^42 ops/J theoretical

Statistic 90

Planetary computing limit Bremermann's 10^50 FLOPS/kg matter

Statistic 91

Margolus-Levitin theorem 6×10^33 ops/J energy-time limit per op

Statistic 92

ExaEnergy project targets 60 gigaFLOPS/W sustainable by 2030

Statistic 93

Post-Moore photonics-neuromorphic hybrid 1000x efficiency by 2040

Statistic 94

The ENIAC computer, completed in 1945, had a peak performance of approximately 0.0000001 gigaFLOPS (100 kiloFLOPS)

Statistic 95

The Manchester Mark 1, operational in 1949, performed about 1.2 kiloFLOPS in floating-point operations

Statistic 96

The UNIVAC I, delivered in 1951, achieved around 0.000001 gigaFLOPS (1 kiloFLOPS) peak performance

Statistic 97

The IBM 701, introduced in 1953, delivered approximately 0.000016 gigaFLOPS (16 kiloFLOPS)

Statistic 98

The CDC 6600, launched in 1964, reached 3 megaFLOPS peak performance

Statistic 99

The Cray-1 supercomputer, released in 1976, had a peak speed of 160 megaFLOPS

Statistic 100

The Cray X-MP, introduced in 1982, achieved up to 940 megaFLOPS in multi-processor configuration

Statistic 101

The Connection Machine CM-5, deployed in 1991, scaled to 1.056 teraFLOPS with 1024 processors

Statistic 102

ASCI Red, completed in 1997, became the first teraFLOPS supercomputer at 1.338 teraFLOPS

Statistic 103

ASCI White, operational in 2000, peaked at 7.226 teraFLOPS

Statistic 104

Earth Simulator, launched in 2002, achieved 35.86 teraFLOPS on TOP500 list

Statistic 105

Blue Gene/L, reached 280.6 teraFLOPS in 2006

Statistic 106

Roadrunner supercomputer hit 1.026 petaFLOPS in 2008

Statistic 107

Tianhe-1A achieved 2.566 petaFLOPS in 2010

Statistic 108

Fujitsu K computer reached 10.51 petaFLOPS in 2011

Statistic 109

Titan supercomputer delivered 17.59 petaFLOPS in 2013

Statistic 110

Tianhe-2 peaked at 33.86 petaFLOPS in 2014

Statistic 111

Sunway TaihuLight achieved 93.01 petaFLOPS in 2016

Statistic 112

Summit supercomputer reached 122.3 petaFLOPS in 2018

Statistic 113

IBM Power9-based Sierra hit 94.64 petaFLOPS in 2018

Statistic 114

Frontier became the first exaFLOPS machine at 1.102 exaFLOPS in 2022

Statistic 115

The first TOP500 list in June 1993 was topped by TMC CM-5/1024 at 59.7 gigaFLOPS

Statistic 116

Intel Paragon XP/S 140 at 143.4 gigaFLOPS topped November 1993 list

Statistic 117

Numerical Wind Tunnel at 170.0 gigaFLOPS in June 1994

Statistic 118

Intel Paragon at 281.0 gigaFLOPS in November 1996

Statistic 119

ASCI Red at 1.340 teraFLOPS in June 1997

Statistic 120

ASCI Red sustained 1.064 teraFLOPS in November 1997

Statistic 121

ASCI Red at 2.382 teraFLOPS in June 1999

Statistic 122

ASCI Q reached 4.944 teraFLOPS projected in November 2001

Statistic 123

Earth Simulator at 35.860 teraFLOPS in June 2002

Trusted by 500+ publications
Harvard Business ReviewThe GuardianFortune+497
In the span of a single human lifetime, our capacity to calculate has soared from the room-filling ENIAC's hundred-thousand operations per second to the exascale frontier where today's supercomputers perform over a quintillion calculations in that same blink of an eye.

Key Takeaways

  • The ENIAC computer, completed in 1945, had a peak performance of approximately 0.0000001 gigaFLOPS (100 kiloFLOPS)
  • The Manchester Mark 1, operational in 1949, performed about 1.2 kiloFLOPS in floating-point operations
  • The UNIVAC I, delivered in 1951, achieved around 0.000001 gigaFLOPS (1 kiloFLOPS) peak performance
  • Frontier supercomputer holds the current TOP500 #1 at 1.194 exaFLOPS Rmax as of June 2023
  • Aurora supercomputer ranks #2 at 1.012 exaFLOPS Rmax in June 2023 TOP500 list
  • Eagle supercomputer at 561.2 petaFLOPS Rmax, #3 on June 2023 TOP500
  • AMD Ryzen Threadripper PRO 5995WX scores 100 GFLOPS peak single CPU FP64
  • Intel Core i9-13900K achieves 1.7 TFLOPS FP32 peak with AVX-512
  • NVIDIA H100 SXM GPU delivers 67 TFLOPS FP64 Tensor Core performance
  • Frontier supercomputer efficiency is 52.72 gigaFLOPS/W Green500 #1 June 2023
  • Aurora at 49.03 gigaFLOPS/W #2 on Green500 June 2023
  • Eagle achieves 46.18 gigaFLOPS/W efficiency #3 Green500 June 2023
  • Moore's Law predicts doubling of transistors every 2 years, implying ~1.86x computing power
  • Exascale computing achieved 2022, zettascale targeted by 2030 at 10^21 FLOPS
  • Quantum supremacy demonstrated by Google Sycamore at 53 qubits, 200s vs classical 10k years

Computing power has exponentially increased from primitive kiloFLOPS to modern exaflop supercomputers.

CPU and GPU Performance

1AMD Ryzen Threadripper PRO 5995WX scores 100 GFLOPS peak single CPU FP64
Verified
2Intel Core i9-13900K achieves 1.7 TFLOPS FP32 peak with AVX-512
Verified
3NVIDIA H100 SXM GPU delivers 67 TFLOPS FP64 Tensor Core performance
Verified
4AMD Instinct MI300X GPU reaches 163.4 TFLOPS FP16 peak
Directional
5Apple M2 Ultra chip peaks at 31.6 TFLOPS FP32 GPU performance
Single source
6Intel Xeon Platinum 8592+ offers 2.9 TFLOPS FP64 per socket peak
Verified
7NVIDIA A100 80GB GPU achieves 19.5 TFLOPS FP64 with Tensor Cores
Verified
8AMD EPYC 9754 (Genoa) peaks at 3.2 TFLOPS FP64 dual socket
Verified
9Qualcomm Snapdragon 8 Gen 2 GPU at 3.2 TFLOPS FP32 peak for mobile
Directional
10IBM Power10 processor delivers 5 TFLOPS FP64 per chip
Single source
11NVIDIA RTX 4090 GPU reaches 82.6 TFLOPS FP16 peak shader
Verified
12AMD Radeon RX 7900 XTX at 61 TFLOPS FP32 peak performance
Verified
13Intel Arc A770 GPU delivers 17.2 TFLOPS FP16 peak
Verified
14ARM Neoverse V2 core in AWS Graviton3 peaks at 0.4 TFLOPS FP32 per core
Directional
15Google Tensor Processing Unit v4 (Trillium) achieves 2.7 petaFLOPS FP16 per pod
Single source
16Cerebras Wafer-Scale Engine 2 (WSE-2) delivers 20 petaFLOPS FP16 AI performance
Verified
17Graphcore IPU Colossus MK2 GC200 at 350 TOPS INT8 per chip
Verified
18SambaNova SN40L chip reaches 2 petaFLOPS FP16 per card
Verified
19Tenstorrent Grayskull at 114 TOPS INT8 peak performance
Directional
20SiPearl Rhea CPU for HPC peaks at 1.7 TFLOPS FP64 per socket
Single source
21Frontier's HPE Slingshot-11 interconnect enables 200 Gb/s per node for calculating power
Verified
22NVIDIA DGX H100 system with 8 H100 GPUs reaches 32 petaFLOPS FP8 AI
Verified
23AMD MI250X dual-GPU die at 47.9 TFLOPS FP64 peak
Verified
24Intel Ponte Vecchio (Max 1550) GPU at 56 TFLOPS FP64 Tensor
Directional

CPU and GPU Performance Interpretation

This dizzying array of silicon bragging rights reveals a computational arms race where the only universal truth is that your laptop is now officially a glorified abacus compared to these number-crunching behemoths.

Current Supercomputers

1Frontier supercomputer holds the current TOP500 #1 at 1.194 exaFLOPS Rmax as of June 2023
Verified
2Aurora supercomputer ranks #2 at 1.012 exaFLOPS Rmax in June 2023 TOP500 list
Verified
3Eagle supercomputer at 561.2 petaFLOPS Rmax, #3 on June 2023 TOP500
Verified
4Fugaku at 442.0 petaFLOPS Rmax, #4 in June 2023
Directional
5LUMI at 381.0 petaFLOPS Rmax, #5 June 2023 TOP500
Single source
6Frontier's Rp peak is 1.707 exaFLOPS in June 2023
Verified
7El Capitan projected at over 2 exaFLOPS, but current Leonardo at 233.3 petaFLOPS #6 June 2023
Verified
8Alps supercomputer at 204.8 petaFLOPS #7 June 2023 TOP500
Verified
9MareNostrum 5 at 175.5 petaFLOPS #9 June 2023
Directional
10Frontier uses 37,888 AMD Instinct MI250X GPUs contributing to its exaFLOPS performance
Single source
11Aurora employs Intel Xeon Max CPUs and Data Center GPU Max for 1.012 exaFLOPS
Verified
12Summit at Oak Ridge has 27,648 NVIDIA V100 GPUs for 148.6 petaFLOPS Rmax current rank #13
Verified
13Perlmutter at NERSC delivers 64.6 petaFLOPS Rmax with AMD GPUs, rank #20 June 2023
Verified
14Frontier consumes 20.99 MW power for 1.194 exaFLOPS, efficiency 56.9 gigaFLOPS/W
Directional
15Japan's ABCI-Q at 95.2 petaFLOPS Rmax for quantum simulation, rank #26 June 2023
Single source
16China's OceanLite at 1.3 exaFLOPS AI performance but 125.4 petaFLOPS HPL #27
Verified
17Microsoft Azure Eagle at 561 petaFLOPS but HPL 561.2 petaFLOPS #3
Verified
18Nvidia-powered Isambard-AI at 132.0 petaFLOPS #18 June 2023 TOP500
Verified
19HPC6 at 110.4 petaFLOPS Rmax #24 June 2023
Directional
20AMD EPYC 7763 CPU in Frontier nodes contributes to overall calculating power
Single source
21HPE Cray EX architecture in Frontier enables 9.2 million cores
Verified
22Japan's Fugaku with A64FX processors at 442 petaFLOPS sustained
Verified
23European LUMI uses AMD MI250X GPUs for 381 petaFLOPS
Verified
24Selene at 63.5 petaFLOPS Rmax with NVIDIA A100 GPUs #33 June 2023
Directional

Current Supercomputers Interpretation

The global supercomputing race has now reached the exascale frontier, where power has become a public competition of precision engineering, national pride, and enormous electricity bills, all to make really, really difficult math look easy.

Energy Efficiency and Power Consumption

1Frontier supercomputer efficiency is 52.72 gigaFLOPS/W Green500 #1 June 2023
Verified
2Aurora at 49.03 gigaFLOPS/W #2 on Green500 June 2023
Verified
3Eagle achieves 46.18 gigaFLOPS/W efficiency #3 Green500 June 2023
Verified
4Alps at 40.60 gigaFLOPS/W #4 Green500, consumes less power per FLOP
Directional
5LUMI supercomputer at 38.99 gigaFLOPS/W #5 Green500 June 2023
Single source
6NVIDIA H100 GPU efficiency up to 1.98 TFLOPS/W FP64 Tensor
Verified
7AMD MI300X at 81.7 TFLOPS/W FP16 efficiency claimed
Verified
8Google TPU v5e at 2.5x better efficiency than v4, ~400 TFLOPS/W INT8
Verified
9Cerebras CS-3 wafer-scale at 1200 TFLOPS/W sparsity FP16
Directional
10Graphcore Bow IPU efficiency 500+ TOPS/W INT8
Single source
11Frontier total power draw 21 MW for 1.194 exaFLOPS
Verified
12Fugaku consumes 29.9 MW at 442 petaFLOPS, 14.8 gigaFLOPS/W
Verified
13Summit power 10.1 MW for 148.6 petaFLOPS, 14.7 gigaFLOPS/W
Verified
14Sunway TaihuLight used 15.37 MW for 93 petaFLOPS, 6.05 gigaFLOPS/W historical
Directional
15NVIDIA A100 SXM4 400W TDP for 19.5 TFLOPS FP64, ~48.75 GFLOPS/W
Single source
16AMD EPYC 9754 400W TDP dual socket ~8 GFLOPS/W FP64
Verified
17Intel Xeon 8592+ 350W TDP ~8.3 GFLOPS/W FP64 per socket
Verified
18Apple M1 Max 60W for 10.4 TFLOPS FP32, 173 GFLOPS/W GPU
Verified
19Qualcomm Snapdragon 8 Gen 2 5nm process 4nm effective, ~640 GFLOPS/W mobile GPU
Directional
20IBM Power10 at 20.6 gigaFLOPS/W in TOP500 systems
Single source
21SiPearl Rhea 2.0 nm process target 50+ GFLOPS/W FP64
Verified
22El Capitan projected 2 exaFLOPS at under 30 MW, ~66 gigaFLOPS/W target
Verified

Energy Efficiency and Power Consumption Interpretation

In the relentless pursuit of computational might, the supercomputing arena reveals a stark hierarchy of efficiency, where the Frontier system's crown for doing the most with each watt is being challenged by specialized accelerators claiming efficiency numbers so high they seem to belong to a different league entirely.

Future Projections and Theoretical Limits

1Moore's Law predicts doubling of transistors every 2 years, implying ~1.86x computing power
Verified
2Exascale computing achieved 2022, zettascale targeted by 2030 at 10^21 FLOPS
Verified
3Quantum supremacy demonstrated by Google Sycamore at 53 qubits, 200s vs classical 10k years
Verified
4IBM roadmap to 100k+ logical qubits by 2033 for fault-tolerant quantum computing
Directional
5Landauer limit theoretical minimum energy 2.8 kT ln2 per bit erasure ~3 zJ/op at room temp
Single source
6Dennard scaling ended 2006, but 3D stacking to continue power efficiency gains
Verified
7Optical computing could reach 10^15 FLOPS/W vs electronic 10^12
Verified
8Neuromorphic computing like Intel Loihi 2 at 10^12 ops/W synaptic
Verified
9Frontier to El Capitan 2x performance at same power by 2025
Directional
10AMD roadmap MI400 series 5x AI performance over MI300 by 2026
Single source
11NVIDIA Rubin platform R100 GPU 30x inference perf over Hopper by 2026
Verified
12Intel 18A process 1.8nm for Xeon 6 by 2025, 20% perf/W gain
Verified
13TSMC A16 1.6nm node 10% speed 15-20% power reduction 2026
Verified
14Quantum annealers like D-Wave Advantage 5000+ qubits solve optimization 10^6x faster
Directional
15Photonic chips Lightmatter Passage 36 petaFLOPS FP16 at 10 kW
Single source
16Global supercomputing capacity to hit 10 exaFLOPS aggregate by 2025
Verified
17AI training FLOPS doubling every 6 months, 10x every 2 years per OpenAI
Verified
18Bekenstein bound limits info density 10^69 bits/m^3 black hole, theoretical compute limit
Verified
19Reversible computing could approach Landauer limit, 10^42 ops/J theoretical
Directional
20Planetary computing limit Bremermann's 10^50 FLOPS/kg matter
Single source
21Margolus-Levitin theorem 6×10^33 ops/J energy-time limit per op
Verified
22ExaEnergy project targets 60 gigaFLOPS/W sustainable by 2030
Verified
23Post-Moore photonics-neuromorphic hybrid 1000x efficiency by 2040
Verified

Future Projections and Theoretical Limits Interpretation

We are in the breathtakingly clever phase where our computing ambitions have outpaced even our best metaphors, simultaneously chasing the ghostly potential of quantum supremacy and the sobering physical limits of thermodynamics, all while patching the fading legacy of Moore's Law with a dazzling quilt of quantum, photonic, and neuromorphic architectures.

Historical Milestones

1The ENIAC computer, completed in 1945, had a peak performance of approximately 0.0000001 gigaFLOPS (100 kiloFLOPS)
Verified
2The Manchester Mark 1, operational in 1949, performed about 1.2 kiloFLOPS in floating-point operations
Verified
3The UNIVAC I, delivered in 1951, achieved around 0.000001 gigaFLOPS (1 kiloFLOPS) peak performance
Verified
4The IBM 701, introduced in 1953, delivered approximately 0.000016 gigaFLOPS (16 kiloFLOPS)
Directional
5The CDC 6600, launched in 1964, reached 3 megaFLOPS peak performance
Single source
6The Cray-1 supercomputer, released in 1976, had a peak speed of 160 megaFLOPS
Verified
7The Cray X-MP, introduced in 1982, achieved up to 940 megaFLOPS in multi-processor configuration
Verified
8The Connection Machine CM-5, deployed in 1991, scaled to 1.056 teraFLOPS with 1024 processors
Verified
9ASCI Red, completed in 1997, became the first teraFLOPS supercomputer at 1.338 teraFLOPS
Directional
10ASCI White, operational in 2000, peaked at 7.226 teraFLOPS
Single source
11Earth Simulator, launched in 2002, achieved 35.86 teraFLOPS on TOP500 list
Verified
12Blue Gene/L, reached 280.6 teraFLOPS in 2006
Verified
13Roadrunner supercomputer hit 1.026 petaFLOPS in 2008
Verified
14Tianhe-1A achieved 2.566 petaFLOPS in 2010
Directional
15Fujitsu K computer reached 10.51 petaFLOPS in 2011
Single source
16Titan supercomputer delivered 17.59 petaFLOPS in 2013
Verified
17Tianhe-2 peaked at 33.86 petaFLOPS in 2014
Verified
18Sunway TaihuLight achieved 93.01 petaFLOPS in 2016
Verified
19Summit supercomputer reached 122.3 petaFLOPS in 2018
Directional
20IBM Power9-based Sierra hit 94.64 petaFLOPS in 2018
Single source
21Frontier became the first exaFLOPS machine at 1.102 exaFLOPS in 2022
Verified
22The first TOP500 list in June 1993 was topped by TMC CM-5/1024 at 59.7 gigaFLOPS
Verified
23Intel Paragon XP/S 140 at 143.4 gigaFLOPS topped November 1993 list
Verified
24Numerical Wind Tunnel at 170.0 gigaFLOPS in June 1994
Directional
25Intel Paragon at 281.0 gigaFLOPS in November 1996
Single source
26ASCI Red at 1.340 teraFLOPS in June 1997
Verified
27ASCI Red sustained 1.064 teraFLOPS in November 1997
Verified
28ASCI Red at 2.382 teraFLOPS in June 1999
Verified
29ASCI Q reached 4.944 teraFLOPS projected in November 2001
Directional
30Earth Simulator at 35.860 teraFLOPS in June 2002
Single source

Historical Milestones Interpretation

The breathtaking speed at which we've rocketed from needing an entire room to calculate a single artillery trajectory to casually simulating supernovas on a desktop proves that humanity's appetite for computational power is the ultimate exponential curve—our ambitions outrace our machines almost as soon as we build them.