Computer organization and architecture

The term computer architecture refers to the set of rules specifying how computer software and hardware are combined and how they interact to make a computer functional; the architecture also specifies which technologies the computer is able to handle. In short, a computer architecture is a specification describing how software and hardware interact to produce a functioning platform.

Historically, there have been two types of computers:
1. Fixed-program computers – Their function is fixed and they cannot be reprogrammed, e.g., calculators.
2. Stored-program computers – These can be programmed to carry out many different tasks; programs are stored in their memory, hence the name.

Modern computers are based on the stored-program concept introduced by John von Neumann.
In 1945, von Neumann, a mathematician, showed that a computer could have a fixed, simple structure and still be able to execute any kind of computation without hardware modification, provided that it is properly programmed with instructions it can execute. This basic structure, often referred to as an ISA (instruction set architecture) computer, has three basic units:
1. The Central Processing Unit (CPU)
2. The Main Memory Unit
3. The Input/Output Device

The first computers were different from the computers that followed and significantly different from modern ones. Because most civilizations used the decimal system, the first computers were designed to work in the same way. Only at later stages was it understood that a binary system is more suitable and much more efficient for electrical devices such as computers, because computers, like any other electrical circuit, recognize two basic conditions: on and off. This understanding led to the modern architectures in which computers are based on the binary system.
A second significant difference relates to size. The rapid technological advancements in electronics and miniaturization did not exist in the early days of computing, and for that reason computers were very large, consumed huge amounts of electricity, and were limited in the scope of solutions they could provide. A significant step forward was achieved with the invention of the transistor (1947). The transistor is a semiconductor device that can amplify and switch signals. Transistors replaced the vacuum tubes and electromechanical switches, which were significantly larger, consumed large amounts of electricity, and were insufficiently reliable.


Historically, computers were simply calculating machines. As computers became more sophisticated, they became general-purpose machines, which necessitated viewing each system as a hierarchy of levels instead of one gigantic machine. Each layer in this hierarchy serves a specific purpose, and all levels help minimize the semantic gap between a high-level programming language or application and the gates and wires that make up the physical hardware. Perhaps the single most important development in computing is the introduction of the stored-program concept of the von Neumann machine. Although there are other architectural models, the von Neumann architecture is predominant in today’s general-purpose computers.

Prior to the 1500s, a typical European businessperson used an abacus for calculations and recorded the result of his ciphering in Roman numerals. After the decimal numbering system finally replaced Roman numerals, a number of people invented devices to make decimal calculations even faster and more accurate.

The wired world that we know today was born from the invention of a single electronic device called a vacuum tube by Americans and — more accurately — a valve by the British. Vacuum tubes should be called valves because they control the flow of electrons in electrical systems in much the same way as valves control the flow of water in a plumbing system. In fact, some mid-twentieth-century breeds of these electron tubes contain no vacuum at all, but are filled with conductive gases, such as mercury vapor, which can provide desirable electrical behavior.

The second generation of computers arrived with the invention of the transistor; transistors consume less power than vacuum tubes, are smaller, and work more reliably. The transistor (short for transfer resistor) is the solid-state version of a switch that can be opened or closed by electrical signals. Whereas mechanical switches have moving parts, transistors do not, which allows them to be very small; as a result, the transistor is ubiquitous in electronics today.
A transistor actually performs two different functions: it can behave either as a switch or as an amplifier.
Loudspeakers and hearing aids are good examples of using transistors as amplifiers: a small electric current at one end becomes a much larger current at the other end. Although transistors are used in many devices for amplification, when discussing computers, the ability of a transistor to work as a switch is much more relevant: a small current flowing through one transistor can switch on a larger current through another transistor.
Transistors are made from silicon. In its pure form, silicon is not a good conductor of electricity, but when it is combined with trace amounts of neighboring elements from the periodic table, it conducts electricity in an effective and easily controlled manner. If we add a small amount of an element such as aluminum, boron, or gallium, the silicon ends up with a slight shortage of electrons in its outer electron shell and therefore attracts electrons from any pole that has a negative potential (an excess of electrons). When doped in this way, silicon or germanium becomes a P-type material. Similarly, if we add a little phosphorus, arsenic, or antimony, we end up with extra electrons in the valence structure of the silicon crystal. This gives us an N-type material.
A transistor essentially works by controlling the flow of electrons between the two different types of silicon, and we can layer them in ways that allow us to make various components. For example, if we join N-type silicon to P-type silicon with a contact on each side, electrons can flow from the N-side to the P-side but not the other way, producing a one-way current (this two-layer device is a diode). We can create N-P-N layers, P-N-P layers, and so on, each able to either amplify or switch current.
Transistors come in thousands of different types, from low to high power, low to high frequency, and low to very high current. Although we can still buy transistors one at a time, most are integrated into circuits by combining billions of transistors into one chip. They are critical components in every modern circuit; they are used in making the various digital logic gates that are essential for computers, cell phones, tablets, washing machines, and similar examples of modern technology.

The real expansion in computer use came with the integrated circuit generation. Early ICs allowed dozens of transistors to exist on a single silicon chip that was smaller than a single “discrete component” transistor. Computers became faster, smaller, and cheaper, bringing huge gains in processing power.

In the third generation of electronic evolution, multiple transistors were integrated onto one chip. As manufacturing techniques and chip technologies advanced, increasing numbers of transistors were packed onto one chip. There are now various levels of integration:
1. SSI (small-scale integration) – 10 to 100 components per chip
2. MSI (medium-scale integration) – 100 to 1,000 components per chip
3. LSI (large-scale integration) – 1,000 to 10,000 components per chip
4. VLSI (very-large-scale integration) – more than 10,000 components per chip
This last level, VLSI, marks the beginning of the fourth generation of computers. The complexity of integrated circuits continues to grow, with more transistors being added all the time. The term ULSI (ultra-large-scale integration) has been suggested for integrated circuits containing more than 1 million transistors. Other useful terminology includes: (1) WSI (wafer-scale integration), building superchip ICs from an entire silicon wafer; (2) 3D-IC (three-dimensional integrated circuit); and (3) SOC (system-on-a-chip), an IC that includes all the necessary components for the entire computer.

The most basic unit of information in a digital computer is called a bit, a binary digit. A bit is nothing more than a state of “on” or “off” (or “high” and “low”) within a computer circuit. In 1964, the designers of the IBM System/360 mainframe computer established a convention of using groups of 8 bits as the basic unit of addressable computer storage. They called this collection of 8 bits a byte. Computer words consist of two or more adjacent bytes that are sometimes addressed and almost always are manipulated collectively. The word size represents the data size that is handled most efficiently by a particular architecture. Words can be 16 bits, 32 bits, 64 bits, or any other size that makes sense in the context of a computer’s organization (including sizes that are not multiples of eight). An 8-bit byte can be divided into two 4-bit halves called nibbles (or nybbles). Because each bit of a byte has a value within a positional numbering system, the nibble containing the least-valued binary digit is called the low-order nibble, and the other half the high-order nibble.
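To make the nibble terminology concrete, here is a minimal Python sketch (the value 0xB7 is an arbitrary example of our own) that splits a byte into its high-order and low-order nibbles using shifting and masking:

```python
# Split an 8-bit byte into its two 4-bit nibbles.
byte = 0xB7                       # arbitrary example: 10110111 in binary

low_nibble = byte & 0x0F          # keep only the low-order 4 bits   -> 0111
high_nibble = (byte >> 4) & 0x0F  # shift the high-order 4 bits down -> 1011

print(f"byte        = {byte:08b}")
print(f"high nibble = {high_nibble:04b}")
print(f"low nibble  = {low_nibble:04b}")
```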

Boolean algebra

Boolean algebra is an algebra for the manipulation of objects that can take on only two values, typically true and false, although it can be any pair of values. Because computers are built as collections of switches that are either “on” or “off”, Boolean algebra is a natural way to represent digital information. In reality, digital circuits use low and high voltages, but for our purposes 0 and 1 will suffice. It is common to interpret the digital value 0 as false and the digital value 1 as true. In addition to binary objects, Boolean algebra also has operations that can be performed on these objects, or variables. Combining the variables and operators yields Boolean expressions. Three common Boolean operators are AND, OR, and NOT. These operators, among others, were introduced earlier, so here we are only recalling the concept. We have also introduced the truth table, which presents the behavior of a specific operator for all combinations of input values.
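As a quick reminder of what a truth table looks like, the following Python sketch (using the integers 0 and 1 as the two values) prints the tables for AND, OR, and NOT:

```python
# Enumerate the truth tables of the three common Boolean operators.
print("x y | x AND y | x OR y")
for x in (0, 1):
    for y in (0, 1):
        print(f"{x} {y} |    {x & y}    |   {x | y}")

print("x | NOT x")
for x in (0, 1):
    print(f"{x} |   {1 - x}")   # NOT flips 0 to 1 and 1 to 0
```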

DeMorgan’s Law is very useful from a programming perspective. It states that the complement of a conjunction is the disjunction of the complements, and vice versa: NOT (A AND B) = (NOT A) OR (NOT B), and NOT (A OR B) = (NOT A) AND (NOT B). This gives us a convenient way to convert a positive statement into a negative one and vice versa, which is very common when dealing with conditionals and loops in programs.
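For instance, this Python sketch (the variable names logged_in and has_permission are hypothetical) checks DeMorgan’s Law exhaustively and shows the kind of condition rewriting it enables:

```python
# DeMorgan's Law:
#   not (A and B)  ==  (not A) or (not B)
#   not (A or B)   ==  (not A) and (not B)

# Exhaustive check over all four input combinations.
for a in (False, True):
    for b in (False, True):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))

# Practical rewriting: the two "deny access" conditions below are equivalent.
logged_in, has_permission = True, False             # hypothetical values
deny1 = not (logged_in and has_permission)          # negated positive statement
deny2 = (not logged_in) or (not has_permission)     # DeMorgan'd equivalent
assert deny1 == deny2
```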

Logic gates

The logical operators AND, OR, and NOT that we have discussed have been represented in an abstract sense using truth tables and Boolean expressions. The actual physical components, or digital circuits, such as those that perform arithmetic operations or make choices in a computer, are constructed from a number of primitive elements called gates. Gates implement each of the basic logic functions we have discussed. These gates are the basic building blocks for digital design. Formally, a gate is a small, electronic device that computes various functions of two-valued signals. More simply stated, a gate implements a simple Boolean function. To physically implement each gate requires from one to six or more transistors, depending on the technology being used. Accordingly, the basic physical component of a computer is the transistor; the basic logic element is the gate.
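Seen from the software level, each gate is simply a small function of two-valued signals. A minimal sketch of this view (the function names are our own, and 0/1 stand in for the low/high voltages):

```python
# Each gate computes a simple Boolean function of two-valued signals.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a            # flips 0 to 1 and 1 to 0

# A gate's behavior is exactly its truth table:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), NOT(a))
```

Unlike these instantaneous functions, however, real gates take time to switch, which is the limitation discussed next.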
We have seen that logic gates drive almost everything a computer does. A standard gate takes time to propagate signals from its inputs to its outputs and is limited by the speed of electrical currents. Research has been ongoing to find faster methods, including the use of light and lightwave electronics. Although we probably won’t see laser logic gates any time soon, their potential is enormous: they could enable computers operating at the petahertz level, performing 1 quadrillion operations per second, a million times faster than our current technology.

Karnaugh maps, or Kmaps, are a graphical way to represent Boolean functions. A map is simply a table used to enumerate the values of a given Boolean expression for different input values. The rows and columns correspond to the possible values of the function’s inputs. Each cell represents the output of the function for those input values. We will not go further with these maps here.

Every computer is built using collections of gates that are all connected by way of wires acting as signal pathways. These collections of gates are often quite standard, resulting in a set of building blocks that can be used to build the entire computer system. These building blocks are all constructed using the basic AND, OR, and NOT operations.
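As a concrete example of such a building block, here is a sketch of a half adder, a standard circuit that adds two bits and produces a sum bit and a carry bit, built only from AND, OR, and NOT (the function names are ours; the sum bit is the XOR of the inputs, expressed here with the three basic operations):

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def half_adder(a, b):
    # Sum bit: a XOR b, built from AND, OR, and NOT.
    s = OR(AND(a, NOT(b)), AND(NOT(a), b))
    # Carry bit: 1 only when both inputs are 1.
    c = AND(a, b)
    return s, c

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```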

Boolean algebra allows us to analyze and design digital circuits. Because of the relationship between Boolean algebra and logic diagrams, we simplify our circuit by simplifying our Boolean expression. Digital circuits are implemented with gates, but gates and logic diagrams are not the most convenient forms for representing digital circuits during the design phase. Boolean expressions are much better to use during this phase because they are easier to manipulate and simplify. The complexity of the expression representing a Boolean function has a direct effect on the complexity of the resulting digital circuit: the more complex the expression, the more complex the resulting circuit.
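For example, the expression (A AND B) OR (A AND NOT B) simplifies algebraically to just A, so a circuit with two AND gates, an OR gate, and a NOT gate collapses to a plain wire. A quick exhaustive check in Python (a sketch of our own) confirms the equivalence:

```python
# Unsimplified expression: needs four gates.
def f_original(a, b):
    return (a and b) or (a and (not b))

# Simplified expression: needs no gates at all.
def f_simplified(a, b):
    return a

# The two agree on every input, so the circuits are interchangeable.
for a in (False, True):
    for b in (False, True):
        assert f_original(a, b) == f_simplified(a, b)
print("equivalent on all inputs")
```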

Integrated circuits

Computers are composed of various digital components, connected by wires. Like a good program, the actual hardware of a computer uses collections of gates to create larger modules, which, in turn, are used to implement various functions. The number of gates required to create these “building blocks” depends on the technology being used.
Typically, gates are not sold individually; they are sold in units called integrated circuits (ICs). A chip (a silicon semiconductor crystal) is a small electronic device consisting of the necessary electronic components (transistors, resistors, and capacitors) to implement various gates. Components are etched directly onto the chip, allowing them to be smaller and to require less power for operation than their discrete-component counterparts. The chip is then mounted in a ceramic or plastic container with external pins, and the necessary connections are welded from the chip to the external pins to form an IC. The first ICs, called SSI chips, contained very few transistors, with up to 100 electronic components per chip. We now have ultra-large-scale integration (ULSI) with more than 1 million electronic components per chip.
Digital logic chips are combined to produce useful circuits. These logic circuits can be categorized as either combinational logic or sequential logic.
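The difference is that a combinational circuit’s output depends only on its current inputs, while a sequential circuit’s output also depends on stored state. A rough Python sketch of the distinction (the gated D latch below is deliberately simplified and ignores real timing behavior):

```python
# Combinational: output is a pure function of the current inputs.
def majority(a, b, c):
    return (a & b) | (b & c) | (a & c)

# Sequential: output depends on a stored bit as well as the inputs.
class DLatch:
    def __init__(self):
        self.q = 0                    # the stored bit (the circuit's state)

    def step(self, d, enable):
        if enable:                    # when enabled, capture the data input
            self.q = d
        return self.q                 # otherwise keep remembering the old bit

latch = DLatch()
print(latch.step(d=1, enable=1))      # 1: latch captures the input
print(latch.step(d=0, enable=0))      # 1: input ignored, state remembered
```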
