Whether you are new to computers or have been around them for years, understanding their architecture can greatly enhance your appreciation of how they function. Everything around us has its own architecture: humans, vehicles, nature, and even the buildings we admire. Computers are no different, and their architecture defines how they operate. Just like in the show How It’s Made, once we learn how something works, we develop a greater appreciation for it and a deeper connection to its functionality.

In this article, I will introduce you to two foundational computer architectures: the Von Neumann Architecture and the Harvard Architecture.

Von Neumann Architecture: A Brief Overview

John Von Neumann, a mathematician, physicist, computer scientist, and engineer, was, along with other luminaries like Alan Turing and Claude Shannon, one of the original pioneers of the stored-program digital computer. In 1945, he described what became known as the Von Neumann Architecture: a processor containing a control unit, an ALU (Arithmetic Logic Unit), and registers, together with memory and input/output systems.

To help you grasp this, I’ve come up with a fun acronym: Friends Don’t Let Enthusiasm Spiral. Let’s break down this process:

  1. Fetch: The first step involves the control unit retrieving an instruction from memory via three buses. The address bus sends the memory address to locate the instruction, the control bus carries the signals that coordinate the transfer (such as a read signal), and the data bus carries the instruction back to the processor.

  2. Decode: This step converts the instruction into control signals, which direct components such as the ALU, registers, and I/O devices. The control unit is essentially the brain of the computer, managing data flow and telling each component what to do.

  3. Load: Here, operands or data are loaded into the system. This is optional and depends on the instruction. The data is sent over the data bus, and importantly, both instructions and data are stored in the same memory space and accessed via the same data bus.

  4. Execute: The control unit activates the ALU to execute the instruction. The ALU performs arithmetic or logic operations (like addition, subtraction, AND, OR, etc.) if the instruction calls for it.

  5. Store: The final step is storing the result back into memory. This is also optional, as not all instructions require it. The address and control signals are sent through the buses, and the data is stored in memory.

Because accessing main memory is time-consuming, the CPU uses onboard registers for quick access. Some of these registers are available to programmers for temporary data storage, while others assist in the instruction cycle.

Important Registers:
  • Program Counter (PC): Holds the address of the next instruction.

  • Instruction Register (IR): Holds the instruction that is currently being decoded or executed.
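To make the cycle and these two registers concrete, here is a toy simulator in Python. This is only a sketch: the opcodes (`LOAD`, `ADD`, `STORE`, `HALT`) and the tuple encoding are invented for illustration, not any real instruction set.

```python
# Minimal sketch of a Von Neumann machine: one memory holds both
# instructions and data, and the PC and IR registers drive the cycle.

def run(memory):
    """Fetch-decode-execute loop over a single shared memory."""
    pc = 0           # Program Counter: address of the next instruction
    acc = 0          # Accumulator: a programmer-visible register
    while True:
        ir = memory[pc]          # Fetch: copy the instruction into the IR
        pc += 1                  # PC now points at the next instruction
        op, operand = ir         # Decode: split into opcode and operand
        if op == "LOAD":         # Load: read data from the same memory
            acc = memory[operand]
        elif op == "ADD":        # Execute: the ALU adds a memory value
            acc += memory[operand]
        elif op == "STORE":      # Store: write the result back to memory
            memory[operand] = acc
        elif op == "HALT":
            return acc

# Program and data share one address space: instructions at 0-3, data at 4-5.
memory = [("LOAD", 4), ("ADD", 5), ("STORE", 4), ("HALT", None), 2, 3]
result = run(memory)   # computes 2 + 3 and stores it at address 4
```

Notice that `memory` is a single list holding both the program and its data; every fetch and every data access goes through that one structure, which is exactly the shared pathway the next section identifies as the bottleneck.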

However, the Von Neumann Architecture faces a limitation known as the "Von Neumann Bottleneck." This happens because both data and instructions must pass through the same bus, leading to slowdowns. Even with the fastest CPU, much of its time is spent waiting for instructions to arrive.

The Harvard Architecture: A Solution to the Bottleneck

The Harvard Architecture solves this problem by implementing two separate buses—one for instructions and another for data. Not only do these buses run concurrently, but they also connect to separate memory spaces. This means that data and instructions can be accessed at the same time, reducing delays. Moreover, since each memory space is separate, the instruction and data memories don't need to be the same size.
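The separation described above can be sketched in the same toy style as before. Again, the opcodes and encoding are invented for illustration; the point is simply that instructions and data now live in two independent memories, mirroring the two independent buses.

```python
# Minimal sketch of a Harvard machine: separate instruction and data
# memories, so an instruction fetch never competes with a data access.

def run(instructions, data):
    """Fetch-decode-execute loop with split memories."""
    pc = 0
    acc = 0
    while True:
        op, operand = instructions[pc]   # fetched via the instruction bus
        pc += 1
        if op == "LOAD":
            acc = data[operand]          # accessed via the separate data bus
        elif op == "ADD":
            acc += data[operand]
        elif op == "STORE":
            data[operand] = acc
        elif op == "HALT":
            return acc

program = [("LOAD", 0), ("ADD", 1), ("STORE", 0), ("HALT", None)]
data = [2, 3]                  # data memory can be a different size entirely
result = run(program, data)    # computes 2 + 3 and stores it at data[0]
```

In real Harvard hardware the two memories sit on physically separate buses, so the fetch of the next instruction can overlap with the data access of the current one; the two-list split here only models the separate address spaces, not that concurrency.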

By having separate pathways for data and instructions, the Harvard Architecture provides greater speed and efficiency, sidestepping the Von Neumann bottleneck entirely.

Impact of Harvard Architecture on Modern Computing

Harvard Architecture’s ability to handle instructions and data concurrently has far-reaching impacts on modern computing, especially in systems requiring high performance and reliability. With its distinct separation of memory spaces, it enables more efficient management of resources, particularly in environments where performance optimization is key.

For instance, in embedded systems such as those used in IoT devices, automotive control systems, and even certain gaming consoles, the Harvard Architecture provides a clear advantage. These systems often need to execute a combination of continuous data processing (e.g., sensor data) and instruction-based tasks (e.g., control logic). By having dedicated pathways for instructions and data, they can manage these tasks without delays, enhancing both system performance and real-time responsiveness.

The Harvard Architecture is also gaining attention in the fields of machine learning and artificial intelligence. Because these fields require vast amounts of data to be processed quickly and accurately, Harvard-style designs are increasingly favored for specific applications like neural network accelerators, where rapid data throughput is essential for training models efficiently.

In conclusion, while Von Neumann architecture laid the groundwork for modern computing, the Harvard architecture’s innovations have paved the way for advancements in specialized computing, particularly in high-performance and real-time applications. As technology continues to evolve, we can expect the Harvard Architecture to remain a cornerstone in systems requiring maximum efficiency and speed.