Chaos Lever Podcast

Get ready for an electrifying ride through the history and evolution of supercomputers! From the groundbreaking Cray-1 to today’s mind-blowing El Capitan, we cover it all—with tangents about Interstellar, floating-point math, and why your iPhone is basically a mini-supercomputer. Join Ned and Chris as they unleash chaos on computing history! 🤖💻✨

What’s Inside:  
📚 Seymour Cray: The genius who made supercomputers... and left his own companies  
🔥 From megaflops to exaflops: Explaining performance in layman's terms  
🌍 Supercomputers solving global warming (well, trying...)  

Take Our Survey!
💬 We want to hear from you! Visit https://chaoslever.com/survey and share your thoughts before the month ends.

Links:
- TOP 500 Linear Graph: https://en.wikipedia.org/wiki/TOP500#/media/File:Supercomputers-history.svg
- FLOPS: https://en.wikipedia.org/wiki/Floating_point_operations_per_second
- TOP 500 64th edition: https://top500.org/lists/top500/2024/11/highs/
- Cray History Timeline: https://cray-history.net/2021/07/19/cray-timeline-documents/
- Cray Background: https://www.invent.org/inductees/seymour-cray 
- Cray-1: https://en.wikipedia.org/wiki/Cray-1
- Sneakers scene: https://youtu.be/coDtzN6bXAM?si=JvAwJb416pxRuANq&t=43 

Chapters:
00:00 Welcome, Alleged Humans  
01:28 Interstellar: Did We Learn Anything?  
04:15 What Makes a Supercomputer "Super"?  
10:12 Seymour Cray: A Singular Visionary  
23:08 The Cray-1: Breaking Barriers in 1975  
36:40 From Supercomputers to HPC Clusters  
48:55 Modern Era: GPUs, Exa-Flops, and El Capitan  
01:20:00 Fun Facts: Your iPhone = A Supercomputer in 2010?!  
01:40:00 Wrapping It Up: Sneakers and PlayStation Supercomputers  

Let us know your favorite fact from the episode in the comments! 👇

What is Chaos Lever Podcast?

Chaos Lever examines emerging trends and new technology for the enterprise and beyond. Hosts Ned Bellavance and Chris Hayner examine the tech landscape through a skeptical lens based on over 40 combined years in the industry. Are we all doomed? Yes. Will the apocalypse be streamed on TikTok? Probably. Does Joni still love Chachi? Decidedly not.

[01:00:00.23]
Ned: Sometimes things happen, and you will never know why. That's a life lesson there, honey.

[01:00:08.15]
Chris: Yeah. Unknowable facts are all around us.

[01:00:12.29]
Ned: It's funny because in a way, that's become less true because of our self-surveillance. The fact that we just record almost everything and log everything, it's almost offensive that information goes missing now.

[01:00:30.06]
Chris: So what I'm taking away from all this is you knocked over the book.

[01:00:46.25]
Ned: Hello, Alleged Human, and welcome to the Chaos Lever podcast. My name is Ned, and I'm definitely not a robot. I'm a real human person who does not float through walls, knock over books randomly, and displace stuffed animals. That would be strange. Who would do such a thing? A robot ghost, that's who. That's not me. With me is Chris, who's also here.

[01:01:12.02]
Chris: I feel like you've learned all the wrong lessons from Interstellar.

[01:01:18.08]
Ned: You assume I learned any lessons from Interstellar.

[01:01:23.16]
Chris: Yeah, that movie is a trip because you watch it and you're like, I understand things. And then it's over and you're like, I understand less than when I started.

[01:01:32.24]
Ned: Yes. It's amazing how much of that movie is right and then how much of it is just complete garbage.

[01:01:43.21]
Chris: Look, that is not a nice thing to say about Matthew McConaughey's acting.

[01:01:48.19]
Ned: The thing that was true is that love transcends time and place.

[01:01:53.13]
Chris: Well, that's true. The other thing that is true is my eye is extremely itchy, and it's upsetting me.

[01:02:02.06]
Ned: Have you tried love? No. All right.

[01:02:09.03]
Chris: What I heard was codeine, and I don't think you're that far off.

[01:02:14.03]
Ned: I did not know what codeine was for a very long time, aside from it being a line from, I forget which movie, but somebody's talking about putting codeine on their pancakes. And I was like, Is that a bit? I don't know. I mean, it could be. Maybe it comes in a sip form. Listeners, if someone can tell me what movie that comes from, I would be grateful because I honestly don't remember, and I would like to know.

[01:02:46.10]
Chris: Probably one of the Barney films.

[01:02:49.12]
Ned: I love you. You love codeine. A Purple Adventure. While you're also doing things, dear listener, we have a listener survey which closes at the end of the month. We would like to hear from you. We've actually gotten a decent number of responses, but we need more because we're greedy.

[01:03:07.20]
Chris: Keep saying we.

[01:03:09.18]
Ned: I'm greedy. There we are. I should speak in I statements. Go to chaoslever.com/survey. Take the survey. You will get our gratitude. Yes, we'll go with gratitude.

[01:03:23.05]
Chris: Yeah, let's go with that.

[01:03:26.23]
Ned: Shall we?

[01:03:28.01]
Chris: Sure.

[01:03:29.02]
Ned: Excellent. I wrote a lot. What? As is tradition. I know. I did my best to keep this a one-parter instead of a two-parter, and I succeeded, but barely. We are going to be talking... With consequences. We're going to be talking about the history of supercomputers. Here we have another instance of a lightning round. We don't call them lightning rounds anymore, but we used to. The Tech News of the Week articles. Another Tech News of the Week article that turned into a full-length episode. That's a theme. A couple of weeks ago, I talked about the newest inhabitant of the Lawrence Livermore National Laboratory. It is the supercomputer El Capitan. They name everything after a mountain; that's their thing, apparently. That got me wondering about the history of supercomputers, which led to a lot of YouTube videos and a modicum of actual reading. Listening. Let's be serious. Today, I thought we could touch on what supercomputers are, how they differ from regular old computers, and maybe even what they're used for, though, spoiler alert, I didn't get that far. Maybe that's part two. I don't know. What is a supercomputer? Chris, why don't you take a wildly uninformed guess?

[01:04:54.07]
Chris: It's a computer from a different planet that, powered by the yellow sun, can fly.

[01:05:05.04]
Ned: Oh, whatever you just said was wrong, and I hate you. A pox on both your houses. Well, your house, the one. Just the one. But if you buy a second house, a pox on that one, too.

[01:05:16.07]
Chris: I mean, technically, it's more of a lean-to, but I'll take it.

[01:05:20.10]
Ned: A supercomputer is, simply put, a computer with a higher level of performance than your average general-purpose computer. That's pretty dull, so I'll go on to say that a supercomputer typically processes calculations in a massively parallel way and is also typically constructed of tens or hundreds of nodes working in tandem, at least as it's envisioned today.

[01:05:45.05]
Chris: What you're telling me is you can't actually give me a definition for supercomputer, but you know it when you see it.

[01:05:52.02]
Ned: God, yeah. That and pornography. The standard unit of measurement when it comes to supercomputers is the FLOP, or floating point operations per second. It used to be millions of instructions processed per second, which was called MIPS, which is more fun to say, but is less relevant to how powerful the computer actually is. I'll explain a little bit about why. We've talked before on this show about the difference between RISC and CISC processing. Essentially, RISC, or reduced instruction set computing, means that the actual processor's instruction set is smaller. It has fewer native instructions available, so you might have to use several instructions together to achieve a desired operation. Whereas CISC has more available instructions, so it has the potential to do the job faster, but it's also harder to program for and can be much buggier. X86 is an example of what was CISC, but they are turning it into RISC; it's fuzzy. MIPS is highly dependent on which of those architectures you choose to go with and which programming language is used on it. MIPS is really good for comparing similar processors that are running the same workload, but not really great for comparing two different supercomputers that might use different processors, architectures, and languages.

[01:07:29.21]
Ned: The FLOP, on the other hand, is a measurement of how many floating point operations a system can complete in a given interval. I had to look this up because I didn't really know what a floating point operation actually is. It involves combining two floating point numbers, which is how computers represent very large or very small numbers. We're talking something that requires more than 10 or 12 digits to represent. It's like scientific notation, where you have something along the lines of 3.14e10, which is a shortened version of 31 billion, 400 million. But you lose some accuracy in there because there might be more digits after that 3.14. That's how computers represent really big numbers, and there is a limit to the level of accuracy that they have, which can lead to strange behaviors. It's something like adding one plus one plus one plus one, and you end up with this weird remainder at the end. I don't know if you've seen that before, Chris.

[01:08:41.09]
Chris: I thought it was a division where you...

[01:08:43.18]
Ned: Oh, yeah, it is a division.

[01:08:46.12]
Chris: But anyway, instead of getting a whole number, you ended up with 3.999999999999999994. Right.

[01:08:52.10]
Ned: Even though you're dividing 12 by 4. What? It's just because of the way that it represents numbers on the back end using these floating points. Right.
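The rounding quirk the hosts are describing is easy to reproduce. Here's a minimal Python sketch (our own illustration, not from the episode) of how binary floating point drifts away from exact decimal values:

```python
# Most decimal fractions (like 0.1) have no exact binary floating
# point representation, so each addition carries a tiny rounding error.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # 0.9999999999999999 rather than 1.0
print(total == 1.0)  # False
```

The same limited precision is behind the 3.999999999999999994-style results the hosts joke about.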

[01:09:01.25]
Chris: Anyway, that's- It's super great because pretty soon we're all going to use a computer to do our taxes.

[01:09:06.07]
Ned: That'll work out great for everyone. Honestly, this whole floating point calculation thing could be its own episode and probably will be at some point. Not it. The point is that supercomputers use FLOPS to measure their speed. That's what we're going to use as we get into the different supercomputers out there. I should note that the earliest supercomputers didn't really have floating point arithmetic, so those systems just used MIPS. That's all they had. Okay. A supercomputer is just a really fast computer. In fact, they don't really call them computers in the industry as much. They call them high-performance computing clusters. But I'm going to posit that supercomputer is more fun to say, and it makes my inner child happy, so they can shove it. What was the first supercomputer? Of course, that could be... No. Well, I mean, yes. Maybe. The thing is, it could be a subject of a lot of debate. Again, we have a definition of supercomputer, but it's a little vague. You could point at different computers, mainframes, really, and go, That's the first one. But the thing is, you first need a regular computer to compare it to. At the early advent of computing, there were no normal computers, or at least, I guess they were all normal computers, or at least they were special.

[01:10:40.05]
Ned: They were special. But super? Compared to what? Your uncle Esteban, who can multiply three-digit numbers in his head? I mean, great party trick.

[01:10:50.16]
Chris: That's super hard. Super hard.

[01:10:54.01]
Ned: That's why Esteban gets the biggest slice of cake. Some of the earliest computers were the ENIAC in 1945 and the UNIVAC in 1951. But these early computers didn't natively do floating point math. Like I said, there was nothing to compare them to, which is why we have to wait for the mid-'70s, and what's generally considered to be the first well-known supercomputer, the Cray-1. So get ready. We're going to be talking about Cray for a while. This should not be surprising. If you've ever heard of supercomputers, there's a good chance you've heard of Cray. You've heard of Cray, right?

[01:11:39.11]
Chris: I mean, I've actually seen them in person before. So no big deal.

[01:11:48.26]
Ned: The Cray-1 was released in 1976 by Cray Research. That was a company founded by Seymour Cray. Not only was the company founded by Mr. Cray, he was also the lead architect of the system, along with his co-founder and chief engineer, Lester Davis. I ended up going down a bit of a rabbit hole on this because Seymour Cray turns out to be a pretty interesting person in his own right. By all accounts, a brilliant engineer. Not necessarily a people person. He was born in 1925 in Chippewa Falls, Wisconsin, and he did his best to stay in that general Midwest area for most of his life. His dad was a civil engineer, and Seymour followed in his footsteps, eventually earning a BS in electrical engineering and a master's in applied mathematics. Before he did that, he was drafted during World War II into the Navy as a radio operator and then ended up being a codebreaker. His experience as a codebreaker led him to land a job at Engineering Research Associates, which had formed from the Navy laboratory that was responsible for building code-breaking machines during the war. It all seems to come back to cryptography.

[01:13:10.21]
Ned: We should really do some episodes about that.

[01:13:13.10]
Chris: How do you feel about a 12-part series?

[01:13:15.19]
Ned: We're two parts in, baby. ERA was eventually bought by the Sperry Rand Corporation, and ERA became part of the UNIVAC division. Oh, hey, there's UNIVAC. How about that? Cray stayed at ERA until 1957, when he left with some of his coworkers to start the Control Data Corporation, or CDC. Seymour had what you could call a singular obsession to make systems that were as fast as possible using somewhat commodity hardware. He had the insight that it wasn't enough to make a faster processor; you had to make every peripheral fast, too. So everything in the system worked in the service of making sure that the CPU was never idle and never waiting on the next bit of information from central memory. Central memory, yet another topic that could take up an entire episode. That's it.

[01:14:17.26]
Chris: I wasn't listening. Excellent.

[01:14:22.17]
Ned: While at CDC, Cray designed the wildly successful CDC 6600 mainframe, which outperformed the comparable IBM Stretch, the 7030, by a factor of three. IBM was nonplussed about this. The head of IBM at the time really took his engineers to task, asking how a company that had a total of 36 people working at it beat out IBM. Cray said, The answer is in your question. The first 6600s were sent to Los Alamos and Livermore, which is not the last time that a cutting-edge computer will end up at either of those research facilities. Some call the CDC 6600 the first successful supercomputer. Like I said, that label is a fuzzy one, and we're waiting for the Cray-1 to really grant that crown. What I can tell you is that the 6600 cost about $23 million in 2024 money, and it could run at a blazing-fast two MIPS, which, remember, is millions of instructions per second. Two may seem small, but that's two million instructions per second. Not bad. The CDC 6600 was also one of the first machines to move away from germanium-based transistors to silicon transistors. The reason is that silicon transistors, which were more difficult to create at the time, could run at a higher clock speed, and the 6600 was running at 10 megahertz.

[01:16:05.12]
Ned: Cray also changed the architecture of the computer, creating a CPU with a reduced instruction set, hey, RISC, and adding peripheral processors that would handle some of the tasks that were typically taken on by that CPU, like handling I/O and the central memory. He reduced the instruction set and then moved those instructions to other processors and said, You handle that stuff. The CPU just crunches numbers. All this contributed to a processor that could run faster and process more instructions than any other computer of its era. It really did take the crown of fastest, most performant computer at the time. It could also do floating point calculations using 60-bit numbers. It was able to perform at about three megaflops, which is three million floating point operations per second. Cray was not a person who was easily satisfied. He and Alexander Hamilton have that in common. Listening to a lot of the Hamilton soundtrack recently, not my choice. He iterated on the 6600 with the 7600. But what he realized was that while designing the 7600, he had hit a limit with the architecture of the system, and he wanted to start completely from scratch with the 8600.

[01:17:31.01]
Ned: Unfortunately, CDC was unwilling to bankroll such a venture, so he left his own company to start Cray Research and develop the Cray-1. You might be wondering, this is his company. He started it. Why wouldn't he just stay there and push for the 8600? Well, he wasn't the president of it. He also thought that the company had gotten too big and too overburdened with management. This company was still 200 people, but it was too much for him. So he's like, I'm leaving to form a smaller company. This is the beginning of a pattern. The 8600 really had no reason to exist, though. The 6600 cost $23 million in today's money, and they sold about 100 units. The 7600 sold only about 75 units, and it ran at 15 MIPS and 36 megaflops. Cray wanted to increase that performance by a factor of 10. But honestly, there weren't enough buyers out there to warrant such a thing. The existing businesses wanted the new systems to be compatible with the older systems' code. If you've poured millions and millions of dollars into a system and written software for it, you don't want to buy a new system where none of your software works.

[01:19:00.10]
Chris: It's the old problem of basic iterative progress is more stable, more manageable, but you don't get the... What did he want? 15 times faster?

[01:19:13.11]
Ned: 10 times faster.

[01:19:14.24]
Chris: I'm not a math major. Settle down. But you don't get that without starting from scratch, taking lessons learned and doing something radically different.

[01:19:23.12]
Ned: We talked about this in the mainframes episode, where IBM introduced the System/360, and they started making it so each new iteration was fully compatible with the previous iterations, mostly through the operating system, so that you didn't have to rewrite your programs every time you bought a new mainframe. Same thing with computers. So CDC just wanted to focus on incremental improvement with the CDC STAR-100, and also start making workstations that leveraged the same technology that they established in the 6600 and 7600 systems. Workstations you can sell more of, maybe make more money. Seymour, he wanted none of that crap. Chris, have you ever seen the movie Sneakers?

[01:20:13.08]
Chris: Of course I have.

[01:20:15.08]
Ned: I was going to have to berate you for the next half an hour if you hadn't. I assume you would do that regardless. For listeners out there who haven't, you should. It's excellent. Great cast, interesting premise, and it's a heist movie, which is the best kind of movie. There's a scene in it where Ben Kingsley, he's the semi-evil version of Bill Gates in the movie, is sitting in a server room, and the servers look like these weird circular benches with a main column rising up out of the middle. When I saw the movie, that was an image that stayed with me. In part because Ben Kingsley is a compelling actor. But also, I was like, That's what servers look like. Those weird server benches were based on the design of the Cray-1 and subsequent Cray supercomputers. Why did they look like that? The semicircular design allowed engineers to minimize the length of wires connecting the peripherals to the CPU. Longer wires introduced latency in the system, and latency was bad. It also helped with cooling these systems, which got ridiculously hot. Crays used liquid cooling, and later models like the Cray-2 used Fluorinert, a substance that, despite its name, was not totally inert at high temperatures and produced a toxic gas that had to be filtered by catalytic converters, lest its poison kill the operators who constantly had to tend the machine.
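To put rough numbers on the wire-length point: electrical signals travel at a sizable fraction of the speed of light, so every meter of wire costs a few nanoseconds. Here's a back-of-the-envelope Python sketch (our own estimate; the 0.66 velocity factor is a typical assumption for copper cabling, not a Cray spec):

```python
# Signals in a wire propagate at roughly 2/3 the speed of light.
# The exact velocity factor depends on the cable; 0.66 is assumed here.
C = 299_792_458          # speed of light in m/s
v = 0.66 * C             # assumed propagation speed in the wire

for meters in (0.5, 1.0, 2.0):
    ns = meters / v * 1e9  # one-way delay in nanoseconds
    print(f"{meters} m of wire ≈ {ns:.1f} ns")
```

At the Cray-1's 80 MHz, a clock cycle is 12.5 ns, so a couple of meters of extra wire could eat most of a cycle, which is why the chassis curls the components in close around the CPU.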

[01:22:02.02]
Chris: It feels less than ideal.

[01:22:04.04]
Ned: Not great. But that chimney design allowed the heat to come up through the chimney and the liquid chillers could keep the system from melting, essentially.

[01:22:16.29]
Chris: Now, that's a good reason or explanation, but is the real reason that he was inspired by the TARDIS?

[01:22:23.23]
Ned: Maybe. I mean, it just... We have to be honest with ourselves and others. It just looked cool. Seymour took four years to design and build the first Cray model, and excitement over it was so high, to the surprise of many, that there ended up being a bidding war between Livermore and Los Alamos over who would get the first unit. Released in 1975, the Cray-1 ran at 80 megahertz and was capable of 160 megaflops, which was almost five times faster than the CDC 7600. Not quite the 10x that Seymour was aiming for, but still, not bad.

[01:23:06.23]
Chris: Still good. Yep.

[01:23:09.26]
Ned: Cray Research set a high price tag of $8 million in 1975 money, which is close to $40 million in today's money. They didn't think they were going to sell more than a dozen units. Turns out that the system was released at just the right time. There were some political things happening in the background, and they ended up selling over a hundred of these systems. Cray became synonymous with supercomputing, and even if the CDC 6600 was technically the first supercomputer, the Cray-1 was what captured the public's attention and put Cray on the cover of magazines, both the computer and the man himself. Chris, this is already in danger of becoming a two-parter, so I'm going to stop talking about Seymour Cray for a bit. But I'll finish it out by saying that he went on to design the Cray X-MP, which was a multi-processor, so it had more than one processor in it, and the Cray-2, before getting frustrated with his own company again and founding a new company called Cray Computer Corporation to develop the failed Cray-3 that used gallium arsenide chips. Only one was ever produced. Later, he went on to form another company called SRC in 1995, and he was trying to develop a massively parallel machine to compete with other supercomputers of the time.

[01:24:42.11]
Ned: But before anything concrete could be developed at SRC, he was tragically killed in a car accident at the age of 71. Still, at 71, he was still founding new companies and designing supercomputers. The man had a singular vision.

[01:25:00.25]
Chris: A parallel vision, some might say.

[01:25:02.23]
Ned: Sometimes. Let's talk about supercomputers of the modern era. Cray was focused on building a single unified machine that could do a ridiculous number of calculations. He developed custom components. His companies innovated on things that didn't exist at the time at all, like fully integrated circuits and stacking circuits to reduce wire length. But eventually, he lost out to commodity hardware in the 1990s. Cray wasn't the only player in the game, and companies like Intel had designed their own supercomputers based on custom chips. But eventually, people came to realize that you could just take a bunch of really fast, relatively cheap nodes, connect them via a fast enough network, and farm out jobs in a massively parallel manner. To a certain degree, this was the end of the true supercomputer era. Once you break up a supercomputer into nodes, it's not really a single computer anymore. It's a cluster. That's what we're going to be talking about from here on out. It's no longer these giant machines with one single central processing unit. It's distributed work on a bunch of... The Accelerated Strategic Computing Initiative's Red supercomputer came online in 1997, and it was built from Intel x86 Pentium Pro processors.

[01:26:34.20]
Ned: The ASCI Red was capable of delivering more than one teraflop, which was more than any Cray machine of the time. At this point, the term high-performance computing becomes more relevant, and maybe supercomputers after this point are more accurately HPC clusters. But again, I like the supercomputer term, so we're just going to keep it there. X86 wasn't the only game in town. IBM and the POWER processor continued to be a force in HPC, and also some designs that were based off of the ARM infrastructure, not infrastructure, instruction set architecture. The single system with a central processing unit was over, though, and an arms race started for adding more cores to a CPU, more CPUs to a node, and more nodes to a cluster. I'm going to give you an idea. Here's a few highlights of the fastest supercomputer of its time for each year. In 1997, we have the ASCI Red, but it has 9,000 processors in it, 1,200 gigabytes of RAM, and one teraflop of performance. Okay, so teraflop, that is, it goes megaflop, gigaflop, teraflop. We're already at, what is that, like a trillion? Yeah, I think that's a trillion.

[01:27:59.24]
Ned: In 2000, we have the successor to Red, ASCI White, which has slightly fewer processors, but more cores on those processors, 6 terabytes of RAM, so five times what the previous one had, and it was capable of doing 12 teraflops, so up by a factor of 10. In 2002, we have the first Japanese supercomputer to take the crown, the NEC Earth Simulator, which had 5,000 processors, a different architecture, more cores, 10 terabytes of RAM, and it was capable of doing 40 teraflops. That was then unseated by IBM's Blue Gene/L in 2004, which had 13,000 processors. They don't specify how much memory, I'm guessing it's a lot, but more importantly, it was capable of doing 596 teraflops. Again, that factor of 10 going up. In 2008, IBM took their own crown and put on a new one, the IBM Roadrunner, which had 12,000 processors, 103 terabytes of memory, and it was the first system to hit one petaflop, which is a thousand teraflops, which is a quadrillion? A lot. I think quadrillion. It's a lot. Then in 2009, we have the Cray XT5, Jaguar, with 37,000 Opteron processors from AMD, 360 terabytes of RAM, and it was capable of doing 1.75 petaflops.

[01:29:43.21]
Ned: To round it all out, in 2010, the supercomputing crown was taken by the first Chinese supercomputer, the Tianhe-1A, which had 14,000 Xeon processors, and more importantly, 7,000 Nvidia Tesla GPUs, allowing it to hit 2.57 petaflops. Now, the reason I point that one out is that the Tianhe-1A was the first to use GPUs as part of the processing load. This is a trend that will continue for future HPC clusters, with the GPU taking on more and more of the computational burden. Not to say that CPUs are slouching and not doing anything, but GPUs are known for being able to do massively parallel vector math, and that's something that some types of high-performance compute find extremely useful. The crown for the most FLOPS has continued to be passed around between the US, Japan, China, and Madonna. Just really terrible albums in there. It was passed around for the next 14 years, with the Top 500 list tracking the current top 500 supercomputer clusters in the world. The 64th edition of the list was published in November 2024. At the very top, to bring us all the way back to the original article that started this, is El Capitan, installed at the Lawrence Livermore National Laboratory.

[01:31:13.12]
Ned: In fact, right now, the top three systems on the list are all at Department of Energy sites. USA! USA! USA? Maybe. Technically, El Capitan is an HPE Cray system named the EX255a. Now, Chris, you might be thinking, How is Cray still on this list? Well, Seymour left a lot of companies in his wake, as we've discussed. But the one we're talking about specifically here for HPE is Cray Research, Inc., the company that Seymour left in 1982. CRI continued to exist after he left, and it was acquired by Silicon Graphics, Inc., also known as SGI, in 1996. They helped Pixar out a lot in the early days. But that acquisition was not to last, and SGI sold CRI off to the Tera Computer Company in 2000. That ended up ushering in a new era for Cray, where they successfully surmounted the supercomputing Top 500 with their XT product in 2009. They continued making systems for research laboratories across the world, and in 2019, HPE acquired them for $1.3 billion. At this point, Cray's secret sauce was less about the hardware and more about how they put all that stuff together. The XT5 and later systems used off-the-shelf processors from AMD, Intel, and Nvidia.

[01:32:55.23]
Ned: The line that they launched with HPE was the EX line, and you won't be surprised to learn that it uses a combination of x86 processors and GPUs. What might raise your eyebrows is the sheer scale of it all. El Capitan, the current record holder, has 11 million cores. That is spread across 43,000 AMD EPYC processors, along with matching AMD Instinct GPUs. It's one-to-one. Every system has matching EPYC processors and GPUs. There are 11,000 compute nodes with 5.4 petabytes of memory and an HPE Slingshot switch, passing traffic around at 12.8 terabits per second. The peak performance of this system has been measured at 2.79 exaflops. That's a lot of numbers I just threw around.

[01:33:56.15]
Chris: Yeah, and they're all so large as to not have any meaning? Yes.

[01:34:04.00]
Ned: Let me try to put this in some context for everyone. We've gone from megaflops, which is 1 million floating point operations per second, which is already really staggering, to gigaflops, to teraflops, to petaflops, and now exaflops, which is 2.79 quintillion floating point operations per second, which is 279 followed by 16 zeros. Since the release of the CDC 6600 in 1965, running at three megaflops, almost 60 years later, we've increased the order of magnitude 12 times. As a point of reference, think about a silicon wafer from which you might make a microchip. Those are about half a millimeter thick. If we had the same growth rate starting in 1965 to now, that silicon wafer would now be 0.11 light years thick, or 1 trillion kilometers. The wafer would extend beyond Earth past the heliosphere, and we're not done yet. That's the growth we're talking about. Staggering. We are now firmly in the era of exascale computing. If we assume consistent growth of an order of magnitude every five years, and if you map it on a logarithmic scale, it is linear in terms of growth, but it has to be a logarithmic scale.
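The "12 orders of magnitude" figure checks out with some quick arithmetic. A small Python sketch (our own check, using the episode's numbers for the CDC 6600 and El Capitan):

```python
import math

MEGA, EXA = 1e6, 1e18

cdc_6600 = 3 * MEGA       # CDC 6600, 1965: ~3 megaflops
el_capitan = 2.79 * EXA   # El Capitan, 2024: ~2.79 exaflops

ratio = el_capitan / cdc_6600
print(f"speedup: {ratio:.2e}")                          # ~9.30e+11
print(f"orders of magnitude: {math.log10(ratio):.0f}")  # 12
```

Roughly a trillion-fold increase in 60 years, or an order of magnitude about every five years, which is the growth rate assumed in the zettaflops projection below.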

[01:35:48.13]
Ned: There's a really good chance that we're going to see the next step up, which is zettaflops, somewhere in the late 2030s. To reach that scale, we're going to have to vastly increase the throughput of the hardware connecting the compute nodes, as well as the power behind those nodes themselves. Silicon photonics is a promising technology to reduce latency, but I'm going to be honest, silicon photonics is always about five years away, just like fusion, basically. Do you feel like you see an article about silicon photonics once every other year?

[01:36:24.01]
Chris: I haven't seen anything recently, but yeah, that was definitely the buzzword of the next hot new thing.

[01:36:30.09]
Ned: It has been several times. I saw a YouTube video last week, which is why I thought of it, and it was talking about a company. It was pretty clear the company sponsored the video. The technology is just around the corner, Chris. Any day now. High-performance computing pushes hardware to its absolute limits, and AI is taking that lead and just running with it. If AI is not a bubble, and it continues to need bigger and bigger models, maybe, the hardware supporting it is going to need to scale as well. High-performance computing is generally considered a niche product and domain. When you look at the systems that are sold under that moniker, we're talking about tens or hundreds of systems at the most. AI might disrupt that, explode the niche, as it were. You might have guessed that these massive computing monstrosities also need a lot of power. You're right. Ironically, many of them are being used to compute weather and climate models. In addition to the big Top 500, there's also the Green 500, which is supposed to track the most energy-efficient supercomputers. I think they do a watts-per-flop thing. Can we flops our way out of global warming?

[01:37:51.08]
Ned: I am skeptical. Another big thing in supercomputing that we might have covered a few times is quantum computing. Again, Chris, this is your fault. I'm not going to go too deep because I think we have about four episodes already covering quantum computing and quantum physics, but it will potentially help solve problems that are currently being crunched by El Capitan and friends. If that's the case, we might have to switch our measurements as flops are not really a thing in quantum computing. At this point, I've hit pages. I thought this was going to be a quick tour of supercomputing.

[01:38:36.28]
Chris: Did you, though?

[01:38:38.01]
Ned: No, not even a little. This is exactly how I saw it panning out. To close things out, a fun bit of trivia. The current generation of the iPhone 16 Pro is capable of 2.58 teraflops. That is equivalent to the processing power of the Tianhe-1A in 2010. Now, obviously, the Tianhe had more memory and storage and could do more in terms of parallelization. But let that sink in. The fastest supercomputer in 2010 and the current iPhone, 14 years later, can perform the same number of floating point operations in a second. One cost an estimated $88 million, and your iPhone is free with a new Verizon account or some such nonsense. Honestly, I'd rather pay the $88 million. Well, thanks for listening or something. I guess you found it worthwhile enough if you made it all the way to the end. So congratulations to you, friend. You accomplished something today. You can go sit on the couch, fire up your Blu-ray copy of Sneakers that I know you have, and enjoy it with some popcorn. You've earned it. You can find us on LinkedIn. Just search Chaos Lever, or go to our website, chaoslever.com, maybe /survey, where you can take our listener survey and also find show notes, blog posts, and general tomfoolery.

[01:40:05.26]
Ned: We'll be back next week to see what fresh hell is upon us. Ta-ta for now. There was a fun aside about PlayStation 2s that I didn't even get a chance to get into, where one of the supercomputers that was developed in 2010 or something was just a bunch of PlayStation 2s linked together. Yeah, they just wired a whole shitload of PS2s together, and it's actually ranked.

[01:40:40.03]
Chris: Yeah. Well, I mean, the fact of them being ranked is really just a dick measuring contest.

[01:40:47.22]
Ned: A dick measuring contest? Yeah.

[01:40:50.22]
Chris: People can create their own clusters and do HPC with off-the-shelf components. What's that one thing that costs a lot of money that I don't like? InfiniBand. Yes. It's just the concept of parallelization that makes it HPC. Also, if you have a computer that's big enough, you might actually be able to pronounce the word parallelization.