Chaos Lever Podcast

In this episode, Ned and Chris examine classical computing fundamentals, breaking down complex topics like Turing machines, the von Neumann architecture, and the role of logic gates in computing. They explain how binary operations, logic gates, and transistors come together to form the foundation of modern computers. They also get into a discussion of reduced instruction set computing (RISC) vs. x86 architectures and the trade-offs between speed, efficiency, and complexity in modern processors.

Links:

xkcd Purity: https://xkcd.com/435/
Turing Machine: https://plato.stanford.edu/entries/turing-machine/ 
Von Neumann Architecture: https://en.wikipedia.org/wiki/Von_Neumann_architecture 
Half Adder: https://www.geeksforgeeks.org/half-adder-in-digital-logic/ 

What is Chaos Lever Podcast?

Chaos Lever examines emerging trends and new technology for the enterprise and beyond. Hosts Ned Bellavance and Chris Hayner examine the tech landscape through a skeptical lens based on over 40 combined years in the industry. Are we all doomed? Yes. Will the apocalypse be streamed on TikTok? Probably. Does Joanie still love Chachi? Decidedly not.

[00:00:00.13]
Chris: So what you're saying is in 35 years, we're going to be talking about Skibidi Toilet?

[00:00:04.22]
Ned: Oh, God, I hope not. That was definitely an obsession that hit my son because he's the target demographic for that level of inanity. And him and his buddy were like, Skibidi Toileting for hours. And it was awful. I wanted to die, Chris. More accurately, I wanted to kill both of them.

[00:00:28.28]
Chris: But I'm resentful of you even bringing it up.

[00:00:32.17]
Ned: Wait a minute. Hello, alleged human, and welcome to the Chaos Lever podcast. My name is Ned, and I'm definitely not a robot. I feel. I laugh. I cry at Kodak commercials. Remember those? I'm not worried about those tears producing rust on my very real skin. That would be positively bizarre. I'm annoyed at you for even implying such a thing. With me is Chris, who is also here. Hi, Chris.

[00:01:17.03]
Chris: Kodak commercials? Now who's doing 30-year-old references?

[00:01:21.27]
Ned: Where's the drum effect when I need it? Oh, it's here. Oh, God, I hate it so much. I hate myself doing it.

[00:01:30.27]
Chris: It's not even a good recording.

[00:01:33.17]
Ned: No, that's probably the most disappointing part. Oh, jeez. Yeah, I was going to say Hallmark commercials, but when I really thought about it, Hallmark commercials have never made me cry. It's the animal shelter ads with Sarah McLachlan.

[00:01:51.05]
Chris: The animals make everyone sad.

[00:01:52.23]
Ned: The animals. Puppies, mostly. And Kodak commercials. Those are the ones that really get my warm and fuzzies. What gets you warm and fuzzy, Chris? That was the best possible response, I think. Fine. We'll talk about some tech garbage then.

[00:02:17.14]
Chris: That's the spirit.

[00:02:18.24]
Ned: All right. We have been talking about quantum computing a lot, and you've brought up classical- Everyone drinks.

[00:02:28.08]
Chris: Every time somebody says quantum, you have to take a drink.

[00:02:32.24]
Ned: Oh, excellent.

[00:02:34.05]
Chris: When's the last time you were drunk at nine o'clock in the morning?

[00:02:38.08]
Ned: Fortunately, this episode is not going to be one of those because in those quantum computing episodes, take a drink, we have mentioned classical computing as a comparison several times, with the assumption that most people understand how classical computing works. But I think it's good to question assumptions. It turns out, when you actually dig into it, most of us don't understand how classical computing actually works.

[00:03:10.20]
Chris: Can anyone explain anything?

[00:03:17.01]
Ned: There you go with philosophy again at 10 AM on a Friday, and I just think that's irresponsible. There's a good xkcd cartoon. Well, I mean, there are a lot of good xkcd cartoons, all of which have stuck in my brain over the years, and I can't get them out. But for the purposes of this episode, there's one good xkcd cartoon called Purity. In it, each scientific discipline claims another field is just an applied version of their field. So sociology is just applied psychology, which is just applied biology, which is applied chemistry, then physics, and then math. And math is all the way to the right, meaning it's the most pure, in scare quotes, of course. Although I would personally put philosophy to the right of math. But that's probably a conversation for another time. Again, it's too early for that shit. The reason I bring up this cartoon is because it's a good model for how CPUs actually work. There are a bunch of layers. If we keep going to the right, as it were, we get down to the fundamentals of how atoms interact. As transistors get smaller and smaller, this becomes a non-trivial consideration.

[00:04:44.11]
Chris: It's not hyperbolic. That's literally what we have to deal with at the moment.

[00:04:48.12]
Ned: As we get down to the three and then the two nanometer size, quantum... Oh, God, I brought it up again. I'm so sorry. Quantum interactions and mechanics actually become a problem for transistors to function properly. But we're not going to get into that. I just wanted to talk about how CPUs and classical computing actually work because we keep bringing up quantum computing.

[00:05:17.26]
Chris: Now, of course- I'd just like everyone to note that Ned said quantum four times in 37 seconds. You're doing great.

[00:05:26.06]
Ned: That's it, though. That's the last time. I swear.

[00:05:28.19]
Chris: Last time of what?

[00:05:30.27]
Ned: You almost got me there. Now, we can't possibly pack all of classical computing into a single episode. I would say you need a whole computer science course to do that, but in reality, you actually need several computer science courses to really get into it, along with some materials engineering and a fair bit of calculus.

[00:05:52.20]
Chris: Fairly certain they call that a degree.

[00:05:54.22]
Ned: Yeah, there are so many aspects to get into, like materials engineering, electromagnetic physics, circuit design, discrete math, logic systems, architecture, linguistics, syntax. We're not going to be able to pack that into the next 30 minutes. Go get a CS degree if you really want one. I have one. You see how well it's done for me.

[00:06:16.17]
Chris: You'll also learn Pascal for some fucking reason.

[00:06:20.01]
Ned: I didn't learn Pascal. So obviously, I went to a better school than you.

[00:06:25.08]
Chris: Better school?

[00:06:27.08]
Ned: Well, we actually went to the same school for a while.

[00:06:30.02]
Chris: Ned, you went to every school.

[00:06:31.13]
Ned: Like I said. I was putting together a bunch of different schools to make my own best school, and I did it.

[00:06:41.01]
Chris: And you ended up with Arizona State.

[00:06:43.01]
Ned: Oh, that is unfair and libelous. Anyway, there are three big ideas that I want to mention when it comes to classical computing, which are Turing complete machines, the von Neumann architecture, and binary and logic operations. We're going to cover those three. We're not going to go super in-depth. There is more. But let's start with the foundation for modern computing, which is the Turing complete machine. Chris, what is a computer?

[00:07:17.15]
Chris: I thought we weren't doing philosophy.

[00:07:21.10]
Ned: If we try to dig down to the essentials of what a computer does, how simple can we actually make it? The goal here is to discover some basic foundational principles that we can build computing theory on. It really comes down to a three-step process, which is to load an instruction, decode the instruction, and then execute the instruction. It's also called fetch, decode, execute. But before we had that, we had Alan Turing working on the bombe machines in World War II, using them to crack the Enigma code. Based on that work, he developed the idea of an abstract machine that could solve any computing problem if it was given enough time. Now, before I describe that machine, let's do a little thought experiment. What does a computer actually do? Well, like I said, it takes a series of instructions and it executes them. That's it. That's all a computer does. We can call that series of instructions a program. It seems like a good enough term. Now, what does it need to execute a program? It needs a way to load the instructions, to read them in one at a time. It needs somewhere to write the results of the execution.

[00:08:50.19]
Ned: It also needs a marker or something to store where it is in the process. What's the current instruction that it's on? If you take all of those components and put them together, what you essentially have is a Turing machine. It's not necessarily a physical object, but it's a conceptual framework.

[00:09:09.23]
Chris: I thought you said this part was going to be complicated.

[00:09:12.14]
Ned: I know. It's so easy, right? The Turing machine was theorized in 1936, so the physical manifestation of such a machine is reliant on the available technology of the time. When he was theorizing this, to put it into physical terms, he talked about paper tape, some mechanical electrical motors, and vacuum tubes, or more likely mechanical relays, being the components to build this machine. If Turing had been around, say, in the 1800s, he might have picked steam power and gears, and, well, probably paper tape again. The machine has three components. It has a spool of infinite tape that has symbols recorded and read on it. It has a tape head that is able to read and write those symbols on the tape, and it has some mechanism to advance the tape forward or backward. Each symbol that's on that tape is part of an alphabet corresponding to a table of rules that the machine follows. The machine has an internal state that changes based on the symbol currently being read. It reads the symbol, that symbol tells it to do something in the table. It does that thing, and then it either advances the tape forward one or it moves the tape backward one.

[00:10:41.04]
Ned: That's it. That's all the machine does. You could say they're an idealized form of modern computing, and they include all the components required to execute any algorithm. Not quickly, but they could do it. That was the point: this is a general purpose machine, and if you can express something as an algorithm, you can program the machine to run that algorithm and potentially get a result, assuming the machine halts. We're not going to get into the halting problem because time. The life of Alan Turing actually has a very tragic ending for those who aren't aware of it. But if you do want to know more about it, please don't watch The Imitation Game. It's wildly inaccurate about almost everything, including Alan Turing himself. I've heard that Codebreaker is better, but I haven't seen it. But the book that The Imitation Game is actually based on is probably the way to go. It's called Alan Turing: The Enigma, and it's next up in my Audible queue. Have you read any biographies of him or have any feedback on where people could find more about Alan?
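For the curious, here is a minimal sketch in Python of the machine Ned just described: a tape, a head, and a table of rules. The rule table is an invented example (it flips every bit and halts at a blank), not anything from the episode.

```python
# A minimal Turing machine sketch: tape, head, and rule table.
# The table below is an invented example that flips bits until it
# reads a blank, then halts. (state, symbol) -> (write, move, next state)
RULES = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # "_" is the blank symbol
}

def run(tape, state="flip"):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")           # read the symbol
        write, move, state = RULES[(state, symbol)]
        tape[head] = write                     # write a symbol...
        head += move                           # ...advance or rewind
    return "".join(tape[i] for i in sorted(tape))

print(run("01101"))  # -> 10010_
```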

[00:12:05.20]
Chris: I mean, besides Wikipedia, obviously.

[00:12:07.29]
Ned: Well, besides that, yeah.

[00:12:10.03]
Chris: No, but I did watch a number of videos that absolutely destroyed The Imitation Game in terms of everything about it, more or less. It's the tale as old as time, right? It's a movie that's based on a true story, and the true story that they based it on was: someone called Alan Turing was alive once. Everything after that you can consider conjecture, let's say.

[00:12:40.07]
Ned: I don't think it's quite that extreme, but yeah, they took liberties with virtually everything and made the story less interesting, almost, which happens a lot with those movies. Too many studio notes, usually. Anyway, Alan Turing created a theoretical computer that can solve any algorithm, but we're missing two important things, which are an actual architecture and physical implementation of the machine, and that alphabet or rule set that it's meant to follow. Let's get to those two things, starting with the von Neumann architecture. The von Neumann architecture is an application of Turing's machine, but I want to put in a few caveats first. For starters, Turing was not the only person thinking about building a programmable computer in the 1930s and early 1940s. Konrad Zuse, a German civil engineer, filed a patent in 1936 for a programmable computer. J. Presper Eckert and John Mauchly were working on ENIAC in 1943, which was not a fully programmable or stored program computer, but they had started working on a successor to ENIAC called EDVAC, with von Neumann, as part of the Manhattan Project in 1944. The first published draft that discussed the architecture, and this really was a first draft, only bore von Neumann's name, but it would be folly to imply that he was the sole contributor to the work.

[00:14:23.15]
Ned: John was very aware of Turing's ideas, as were many other people, and von Neumann himself never claimed the mantle as father of modern computing. He shares that honor with many other people. But unfortunately, people can only remember so many names, and his was at the top of the paper. That's why we call it the von Neumann architecture.

[00:14:48.04]
Chris: Well, there's also the fact that John von Neumann was a legend in his own time and is widely regarded as one of the most intelligent people who has ever lived in the history of Earth.

[00:15:00.29]
Ned: Well, God, there is that, too, Chris.

[00:15:02.26]
Chris: So, you know. Just a little fun fact for Friday.

[00:15:07.13]
Ned: Yes, that's what we're doing. Fun Friday facts. Yeah, and I actually read some of the first draft. It's a reconstruction of the first draft, or at least a cleaned-up one, because it was really a first draft. It wasn't complete. It's missing a whole bunch of sections. It hadn't been proofread. It hadn't been spell checked, since that wasn't a thing. It was a transcription of handwritten notes from von Neumann that someone cleaned up later. It's very dense. I got about six pages in and I was almost asleep because my brain just hurt so much and was like, We're shutting down now. Go read it if you want. Link in the show notes, of course. Early calculating devices, like, say, the Babbage difference engine, were able to run a fixed program that was configured by physically constructing circuits. Even systems like Colossus and the ENIAC that I just mentioned were programmed by altering the physical layout through switches, plugboard wiring, or patch cables. These were not general purpose machines. They could do one thing very well, like, say, create ballistics tables, but you couldn't use them to play Doom, at least not easily.

[00:16:31.14]
Chris: Yeah, I mean, considering they didn't have a monitor, I think that's a disqualifying factor.

[00:16:37.21]
Ned: I suppose if they could run the ticker tape fast enough, maybe... No. Okay.

[00:16:43.20]
Chris: They could turn Doom into a Choose Your Own Adventure novel. Oh. Oh, shit. We're going to get sued. I don't... Oh. Did you know those people copyrighted the name Choose Your Own Adventure?

[00:16:54.03]
Ned: I did, and they're very litigious. Anyway, so what Turing expressed with his machine and von Neumann with his architecture was a stored program computer, one that can hold both data and a program on some type of media and then execute that program. What the data and the program are is arbitrary, and they can really be anything, which is why, on a modern computer, you can play Doom, you can write a Word document, you can attend a Zoom call, all on the same device, maybe at the same time. I am, of course, talking about your WiFi-enabled crockpot. The von Neumann architecture is composed of a few different key components. We had our theoretical machine before from Turing. This is less theoretical, but still a bit abstract. We've got an input device and an output device. Could be the same device, could be two separate devices. But honestly, how else are you supposed to load a program and receive results? The actual form of the device is not relevant right now. Then we have a central processing unit, CPU, and a separate memory unit. The memory unit holds data and instructions, and the CPU goes through a cycle of fetching the data and instructions from memory, decoding those instructions, and then executing the instructions.

[00:18:32.02]
Ned: To do that, inside of the CPU, it has a few additional components. It's got an accumulator that just holds the result of the current instruction, or the most recently run instruction. It has an instruction counter that goes up every time it executes an instruction, so it knows to go on the next cycle and fetch the next instruction on the stack. Then it also has an ALU, or arithmetic/logic unit. The instruction counter and the accumulator are part of a control unit. Then the other one is the ALU, which performs the logical or mathematical operations based on the instructions that are decoded. What's crazy about this is that this fundamental architecture is virtually unchanged in your modern computer. You still have a CPU inside your laptop, phone, and crockpot. And inside of that CPU is going to be an instruction counter, an ALU, some registers for storing state, and connections to a bus. And that bus connects to things like I/O devices and memory. It's virtually unchanged since he came up with this in 1944.
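As a rough illustration of that cycle, here is a toy von Neumann machine in Python. The three-instruction set is invented purely for this sketch; the point is one memory holding both program and data, an instruction counter, and an accumulator.

```python
# A toy von Neumann machine: program and data share one memory.
# The tiny instruction set here is invented for illustration only.
memory = [
    ("LOAD", 7),   # fetch memory[7] into the accumulator
    ("ADD", 8),    # add memory[8] to the accumulator
    ("HALT", 0),
    None, None, None, None,
    2,             # address 7: data
    3,             # address 8: data
]

accumulator = 0
counter = 0  # instruction counter

while True:
    op, operand = memory[counter]   # fetch
    counter += 1
    if op == "LOAD":                # decode + execute
        accumulator = memory[operand]
    elif op == "ADD":
        accumulator += memory[operand]
    elif op == "HALT":
        break

print(accumulator)  # -> 5
```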

[00:19:52.05]
Chris: Like I said, one of the smartest people who ever lived. But yeah, you're right. In terms of classical computing, it's called classical for a reason. What we have is generations of small gains, incremental change, not massive change. Things got faster, things got more reliable, things got more expensive. Someone invented a GPU, quantum. Oh, shit. Everybody drink.

[00:20:23.20]
Ned: I didn't do it. That was all you. When you think about memory, we're not talking necessarily about RAM, either. Memory could mean a lot of things. It could mean the SRAM that's inside your CPU, the DRAM that's on the bus. It could be NVMe drives, it could be SSDs, spinning rust, tape drives, whatever.

[00:20:45.04]
Chris: What it means is a place to hold information until it's needed. That's it. Yep. Everything else is a categorization.

[00:20:52.03]
Ned: Exactly. What it means is that good abstractions, like the von Neumann architecture, are extremely resilient. Everything else is just applied computational theory. Now, I've used the term instructions a few times, and that implies that the CPU has some group of instructions that it knows how to perform. It does. Those are referred to as the instruction set.

[00:21:18.24]
Chris: Great name.

[00:21:20.08]
Ned: Just nailing it. How are those instructions defined and implemented inside a CPU? This actually gets down to the architecture of the CPU. When we talk about x86 versus Arm, which is what a Mac uses, versus RISC-V, what we're talking about is that instruction set that is baked into the architecture of the CPU, which defines what the CPU knows how to do natively when it reads one of those instructions in.

[00:21:53.13]
Chris: Surely at some point during this, we're going to talk about what the name RISC means and why. Yeah, we are. Neat.

[00:21:59.25]
Ned: How did you know? I'd like to tell you there will be no math in this section, but I would be lying. I'm going to try to keep things very simple, especially since we don't have the benefit of diagrams and animations to walk you through how things operate. If you are watching the video on YouTube, I will put up some diagrams during this section, and probably in the previous sections where we're talking about the von Neumann architecture. But as Chris has mentioned in previous episodes about a certain kind of computing that shall remain unnamed.

[00:22:35.15]
Chris: Drink by inference. Damn.

[00:22:38.08]
Ned: Classical computing relies on binary numbers. Things are on or off, one or zero, high or low. All of the logic and math a computer does is grounded in the world of binary numbers. There are two key concepts behind all of this: logic operations and transistors. We'll start with logic gates and then describe how they're built with transistors, or really any type of relay. Logic operations. If you've ever built a formula in Excel, created a filter for a Google search, or done some very basic programming, you've encountered these logical expressions: AND, OR, and NOT. Those are the three fundamental expressions. AND is a statement that is true only if all the assertions are true. Like, Chris is smelly and it's raining.

[00:23:37.05]
Chris: It's not raining. Wait.

[00:23:39.09]
Ned: If Chris turns out to smell like lavender and hugs, which he does, then the statement is false, regardless of whether or not it's raining. If we say true is equivalent to one and false is equivalent to zero, then one AND one is one, and anything else, like one AND zero, results in zero. That's a basic AND. OR only requires that one assertion be true. If it isn't raining, but Chris stinks like gym lockers in August, then the statement Chris is smelly or it's raining is true. Expressed in binary, zero OR zero is false, and anything else, like zero OR one, or one OR one, is true. The last one is NOT, and NOT is just an inversion. I'm going to give you a million dollars, Chris. Not.

[00:24:42.07]
Chris: #35yearoldreferences.

[00:24:43.11]
Ned: Killing it. That means I am not, in fact, going to give you a million dollars. In binary, the operation NOT one is zero, and NOT zero is one. Cool?
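If you want to see those three operations as actual binary, here is the whole thing in a few lines of Python; the bitwise operators are just one way to spell it.

```python
# The three fundamental logic operations on binary digits.
AND = lambda a, b: a & b   # 1 only if both inputs are 1
OR  = lambda a, b: a | b   # 1 if at least one input is 1
NOT = lambda a: 1 - a      # inversion: 1 -> 0, 0 -> 1

# Print the full truth tables.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 1:", NOT(1), "NOT 0:", NOT(0))
```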

[00:25:00.10]
Chris: Cool-ish. Okay. I mean, the important thing to note is that, in a binary operation, what you have to define is: what is the environment you're looking for? Does that environment satisfy the requirements based on this operation, yes or no? The AND is all-inclusive. Everything has to be true. So in your bundle, even if there are 10 things in the statement, they all have to be true. With an OR, any of those statements can be true, and then you are going to get a result of true. You can have any amount of inputs and a filter that is variously rigorous in how it defines a true answer. Then at the end, you have one answer. Ten inputs, one answer. Which Boolean operation you use defines how you get that answer and why you've gotten that answer, which is the important part, which is, I'm sure, what you're going to get to next.

[00:25:56.13]
Ned: Yes. Now, typically, most logic gates in a CPU have one or two inputs. They don't typically have more than that. The number of inputs is defined by what type of logical operation you're trying to perform. There are some other logical operations, like a NAND, which is the inverse of an AND, a NOR, or an exclusive OR. The thing is, you can construct any of those using the basic three operations of AND, OR, and NOT, which means if we want to build a computer with logic and arithmetic operations, we need to create a physical manifestation of each operation or gate type. That's where transistors come into play. Transistors are basically a relay with no moving parts. What's a relay? We can think of a relay as a mechanical switch that can sit in an open state or a closed state. If it's an electrical relay, when it's open, no current will flow through the relay. When it's closed, current can flow through. The state of a relay is handled by some type of control mechanism. That can be a human flipping a switch. Your light switch, that's a relay. It's one that you control with your hand or your shoulder or whatever.

[00:27:21.11]
Ned: Or it could be that there is a coil that contracts when power is applied to it, which opens the relay, and then when power is removed from the control, the relay closes again, or the opposite, depending on what you want that relay's default state to be. The earliest computers used mechanical relays and vacuum tubes, both of which had either moving parts or rapidly shifting electric fields, and those parts wore out all the time. They were also limited in how quickly they could switch on or off. Think about how quickly a mechanical switch can go back and forth. You could maybe do it tens of times a second, but you'd increase the failure rate, and that thing would get hot real quick. Vacuum tubes were faster, but they would also wear out. The ENIAC used 17,000 vacuum tubes, and one burned out about every two days, which is not as bad as I thought it was.

[00:28:22.27]
Chris: No, that almost sounds not terrible.

[00:28:26.11]
Ned: Unfortunately, it took at least 15 minutes to find the tube that had burned out.

[00:28:32.16]
Chris: Then 12 seconds to burn your hand on the tube, a solid 90 seconds of swearing in a classic 1940s mid-Atlantic accent.

[00:28:43.29]
Ned: Yes.

[00:28:45.01]
Chris: Then a martini, and then you replace the tube.

[00:28:49.01]
Ned: Yeah, standard operating procedure. Then along came transistors. Transistors are super cool because they have no moving parts. They use a semiconductor to change conductivity by applying a current to the control layer. Transistors can switch their state as often as you can cycle the current on and off to that control pin. You have to keep the equipment from melting, because switching it back and forth does generate heat. But the key here is there are no moving parts. We're changing the physical properties of a layer, but we're not actually moving the layer in any way, which means that transistors last way, way longer than vacuum tubes, and they can also switch much, much faster. For instance, the CPU in your phone has a clock speed of 2 gigahertz, which means that the transistors can change state about 2 billion times a second. That's pretty fast.

[00:30:03.02]
Chris: That's faster than the 10 times a second we talked about earlier.

[00:30:07.17]
Ned: Just slightly. That's transistors. They're awesome. How do you build a logic gate with a transistor? Well, here, again, a diagram would be very helpful. I will put one up if you're watching the video, but I'll try to describe a very simple one. The first thing to know is that a transistor has three wires. It has the input, the output, and then the control pin. Current can only flow in through the input and out through the output. You can't run current the other way, in through the output and out through the input. That would be ridiculous. The flow of that current is controlled by the control wire. When voltage is applied to the control wire, then the current can flow through the transistor. When it's removed, the current can't flow. If you want to make a NOT gate, you attach your wire to the input side, and that input side is always receiving current. As long as the transistor sits open, you receive current on your wire. When that transistor is closed by applying a current to the control wire, now the current flows through the transistor to the output, and you stop receiving current on your wire.

[00:31:27.03]
Ned: If we consider that the control wire in our logic gate is our input, and our wire attached to the transistor is our output, then if there's an input of zero, we get an output of one. If we apply current to the control wire, which would be a one as input, then we would get a zero, or no current, as output. Bam, we just made a NOT gate.
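Here's a loose software model of what Ned just walked through, treating the transistor as a switch that diverts current away from the output wire when its control pin is powered. This is simplified to pure logic; a real circuit also needs resistors and a path to ground.

```python
# Model the NOT gate described above: the output wire taps the supply
# side of a transistor. When the control pin is powered, current is
# diverted through the transistor and the output wire goes dead.
# (Simplified to pure logic; a real circuit needs resistors/ground.)
def not_gate(control: int) -> int:
    transistor_conducting = (control == 1)
    # The output sees current only while the transistor is NOT conducting.
    return 0 if transistor_conducting else 1

print(not_gate(0), not_gate(1))  # -> 1 0
```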

[00:31:54.12]
Chris: It actually sounds crazy, but if you see the graphic, it makes a lot more sense.

[00:32:00.01]
Ned: We're trying to do the inverse, right? That's all the NOT gate is trying to do. An AND gate, remember, needs two inputs to both be positive, or on, in order to get a positive output, or a one. What you do is you wire two transistors in series, so the output of the first is connected to the input of the second, and the two inputs are the two control wires of the transistors. If both those control wires are on, then current will flow through the whole thing and you get a one as an output. If either of those control wires is not powered, meaning you have a zero on either of your inputs, then no current flows through, and we just built an AND gate. All right, last one is the OR gate. You probably won't be surprised to learn that the OR gate also has two transistors, because we need two inputs. But instead of wiring them in series, they're wired in parallel. What that means is if current flows through either transistor, then you get a one at the bottom. If either of the control wires is powered up, the current flows through, you get a one at the bottom, and we built an OR gate.

[00:33:22.08]
Ned: That's it. Those are your three fundamental gates. There are other ways to do it, but those are the simplest ways. Using those fundamental building blocks, every other type of logic gate can be constructed using a combination of these three gates. From those gates, you can create common operations like addition, subtraction, and multiplication. Now, Chris, I was going to put a whole thing in here about how to create an adder using logic gates, but it really needs diagrams. I was already pushing it anyway. If you want any chance of understanding how to create something that adds binary digits, you need a diagram. It involves creating a half adder using an exclusive OR and an AND gate, and then chaining it with other half adders and an OR gate, but I've already said too much.
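Since a diagram is off the table, here is the half adder as code instead, first building the exclusive OR out of the three fundamental gates the way Ned hinted at. This is one standard construction, not the only one.

```python
# Gates from the three fundamentals, then a half adder from those.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # Exclusive OR from AND/OR/NOT: true when exactly one input is true.
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # Sum bit is XOR; carry bit is AND. Chain half adders (plus an OR
    # for the carries) to build a full adder, as mentioned above.
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")
# 1 + 1 gives carry 1, sum 0: binary 10, which is two.
```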

[00:34:15.04]
Chris: I need you to stop. Yeah, I need you to stop. Sorry.

[00:34:18.19]
Ned: Let's put it all together. We've got our theoretical Turing machine. We've got our conceptual architecture from von Neumann. Now we have a way of constructing logical operations using physical media, the actual transistors. What we have now is an applied version of a Turing machine using the von Neumann architecture and discrete math for logic and arithmetic operations. Now, a modern CPU has all kinds of specialized transistor groupings to perform various operations, and it has an instruction set that is specific to that processor and the processor family. Intel and AMD use the x86-based instruction set. They actually use an extended instruction set, especially once they added 64-bit to that. Arm has their own reduced instruction set, and so does RISC-V. You mentioned reduced instruction set earlier, Chris, and this is about as deep as I'm going to get into it. But the point is that Intel, when they were designing the x86 chip, added a whole bunch of interesting instructions, probably more than was strictly necessary. It's like having too many primitives when you can always build that instruction from other instructions. Apparently, there was a joke from one of the early Intel engineers that the only way to become a fellow at Intel was to get an instruction added to the instruction set, but there was no award for removing one.

[00:36:01.10]
Ned: So what do you think happened? RISC architecture, the reduced instruction set, was an attempt to cull that bloated instruction set down to something a little more manageable. That's what you find in Arm and RISC-V processors.

[00:36:21.02]
Chris: It's always the trade-off between the two families. The x86 instruction set has, let's just pick a number, a billion instructions.

[00:36:30.01]
Ned: It's got a lot.

[00:36:32.05]
Chris: I feel like I might be undershooting it. But that covers a lot of use cases, even incredibly niche and edge cases. You can do more with one instruction set, which is true. RISC, on the other hand, cuts that number down to 1.5 million, which is way less, but it's more streamlined. There's less in the CPU. It can do the processing of those 1.5 million instructions faster. Then you, as the programmer, have to know the difference between these two things, or at least the compiler does. Because if you write a program for x86, it won't run on a RISC chip. Not in its binary form. The way that the two processors handle it is completely different. The instruction set from x86 tries to handle everything in one go, whereas with RISC, it might take five goes, but if it goes five times faster, then what's the difference?

[00:37:32.29]
Ned: The CPU architecture can be simpler, which makes them easier to make, but also easier to debug.

[00:37:41.27]
Chris: It also makes them cheaper.

[00:37:44.04]
Ned: Yes. All good things. It makes them more power-efficient because the architecture can be simpler. There are trade-offs. I did want to get into machine code and assembly language and their relationship with higher-level programming languages, but there's always next time. In there, we can talk maybe more about instruction sets and what's included in those actual operations.
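To make the trade-off concrete, here is an invented toy comparison in Python, not real x86 or Arm assembly: the same multiply-and-accumulate done as one complex instruction on one machine versus a sequence of simple instructions on the other.

```python
# An invented toy comparison, not real x86 or Arm code: the same
# multiply-and-accumulate done as one complex instruction vs. a
# sequence of simple ones. Fewer instructions isn't automatically
# faster if each one takes longer to decode and execute.
registers = {"r0": 0, "r1": 4, "r2": 5}

# "CISC" style: one instruction does a lot of work.
def MAC(dst, a, b):
    registers[dst] += registers[a] * registers[b]

# "RISC" style: the same result from simpler steps.
def MUL(dst, a, b): registers[dst] = registers[a] * registers[b]
def ADD(dst, a, b): registers[dst] = registers[a] + registers[b]

MAC("r0", "r1", "r2")   # complex: r0 += r1 * r2
MUL("r3", "r1", "r2")   # simple: r3 = r1 * r2
ADD("r0", "r0", "r3")   # simple: r0 = r0 + r3
print(registers["r0"])  # -> 40 (20 from each path)
```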

[00:38:10.02]
Chris: If people really want to understand this stuff, there are one or two other things that you should look up. The first one is there are dozens of demonstrations of Babbage's difference engine. The plans still exist. People have recreated the difference engine. If you want to see this work in a mechanical fashion and understand how to put things into a machine like this to do basic mathematics, sometimes seeing it in the larger physical world, even one that is crazy slow and has to be driven by a hand crank, really just helps internalize what's going on with transistors and logic gates and all that stuff, because everything that we talked about is applicable back then. It's just an electronic, microscopic implementation.

[00:38:54.29]
Ned: I would say the other thing to look at is the Little Man Computer videos. Just search Little Man Computer. There's a hundred of them. It's a mental model for how computers go through their fetch, decode, and execute cycle and store and retrieve information, but at a very simplified level. Well, hey, thanks for listening or something. I guess you found it worthwhile enough if you made it all the way to the end. So congratulations to you, friend. You accomplished something today. Now you can go sit on the couch, work off all the alcohol that you just drank, and take it easy for the rest of the day. It's Friday. You've earned it. You can find more about the show by visiting our LinkedIn page, just search Chaos Lever, or go to our website, chaoslever.com, where you'll find show notes, blog posts, and general tomfoolery. We'll be back next week to see what fresh hell is upon us. Ta-ta for now. I wrote something in assembly this week. It was terrible. Because I started watching lectures from MIT, and I just fell down a rabbit hole.