For this article, I’m assuming the reader is familiar with the basics of digital logic and computer memory (D flip-flops, registers). If not, I encourage you to read about these topics first for a better understanding of this article.
When we write code, we give little thought to the inner workings of how our machine is able to take that human-readable text & convert it into instructions that actually make the magic happen. For low-level programming languages such as C, the compiler translates the code into assembly, which is further converted into machine code. Machine code is a string of binary numbers that our beloved machines are excellent at working with (although a 32- or 64-bit string of numbers is not so appealing for humans to work with).
Machine code is actually instructions that are stored in memory. When you run a program, software, or an app, the processor loads those instructions and starts executing them. So far, I have worked with the MIPS processor during my time at college, where I have written assembly code that’s converted to machine code with the help of an assembler. After this is done, the processor executes those instructions one at a time and runs my program. In modern machines, billions of instructions are read and executed every second. The speed of your computer depends on the clock in the processor, among other things. When you hear that a processor is 1 GHz, it essentially means it has a clock inside it ticking at that frequency — a billion cycles per second.
That’s basically a short & simple explanation of how the code we write is actually executed. Of course, there’s much more to computer architecture and engineering. I have ideas for more posts on computer architecture, which I’ll publish soon, so subscribe for an update when that happens. Feel free to check out other posts on the blog.