We have mentioned earlier that there are already standard approaches defined for representing colors and images digitally. For grayscale images, one common approach is to use 8 bits per pixel, allowing for 256 shades of gray. Each pixel’s value typically represents the intensity of light, so 0 represents no light intensity (black) and 255 represents full intensity (white), and values in between are varying shades of gray, from dark to light. Although grayscale can be represented with a single 8-bit number, an approach known as RGB uses three 8-bit numbers to represent the intensity of Red, Green, and Blue that combine to make a single color. This means that 24 bits are needed to represent the overall color.
For example, the color red is represented in RGB with all 8 red bits set to 1 and the remaining 16 bits for the other two colors set to 0. For yellow, which is a combination of red and green with no blue, we set the red and green bits to all 1s and leave the blue bits as all 0s. Each component color can vary from 00000000 (0 decimal/00 hex) to 11111111 (255 decimal/FF hex). A lower value represents a darker shade of that color, and a higher value represents a brighter shade. With this flexibility in mixing colors, we can represent nearly any shade imaginable.
Besides individual colors, there are also multiple commonly used approaches for representing an entire image. One simple approach is the bitmap: bitmap images store the RGB color data for each individual pixel, as we saw earlier. Other image formats, such as JPEG and PNG, use compression techniques to reduce the number of bytes required to store an image compared to a bitmap.
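To see why compression matters, we can work out the raw pixel-data size of an uncompressed 24-bit bitmap; the function below is a simple illustration (it ignores file headers and row padding that real bitmap files include).

```python
# With 24-bit RGB, every pixel costs 3 bytes, so an uncompressed
# image's size grows directly with width x height.

def bitmap_bytes(width, height, bits_per_pixel=24):
    """Raw pixel-data size of an uncompressed image, ignoring headers."""
    return width * height * bits_per_pixel // 8

# A 1920x1080 image needs about 6.2 million bytes of raw pixel data.
print(bitmap_bytes(1920, 1080))  # 6220800
```

A JPEG or PNG of the same image is usually far smaller, which is exactly the saving that compression provides.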

A single binary value has different meanings in different contexts. For example, given the binary value 011000010110001001100011, a text editor will assume the data is text, an image viewer may assume it is the color of a pixel in an image, and a calculator may assume it is a number; each program is written to expect data in a particular format. We have seen examples of numbers, text, colors, and images in digital format, and the same holds for audio and video. In general, we can represent anything as a sequence of 0s and 1s, and a device that works with binary data can be adapted, through software, to deal with any kind of data.
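The three interpretations mentioned above can be demonstrated directly: the same 24 bits from the example (the bytes 0x61, 0x62, 0x63) are read below as text, as a number, and as an RGB color.

```python
# The same 24 bits, interpreted three ways.
data = bytes([0b01100001, 0b01100010, 0b01100011])

as_text   = data.decode("ascii")          # a text editor's view
as_number = int.from_bytes(data, "big")   # a calculator's view
as_color  = (data[0], data[1], data[2])   # an image viewer's RGB view

print(as_text)    # abc
print(as_number)  # 6382179
print(as_color)   # (97, 98, 99)
```

Nothing in the bits themselves says which interpretation is "right"; the meaning comes entirely from the program reading them.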

So, we have explained how we can represent data, but computers do more than simply store data. They allow us to work with data as well. With a computer’s help, we can read, edit, create, transform, share, and otherwise manipulate data. Computers give us the capability to process data in many ways using hardware that we can program to execute a sequence of simple instructions. Computer processors that implement these instructions are fundamentally based on binary logic, a system for describing logical statements where variables can only be one of two values — true (1) or false (0).
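Binary logic, as mentioned above, works with only the two values 0 and 1. A minimal sketch of its basic operations, written as small truth functions:

```python
# The three fundamental operations of binary logic, on values 0 and 1.
def AND(a, b): return a & b   # 1 only when both inputs are 1
def OR(a, b):  return a | b   # 1 when at least one input is 1
def NOT(a):    return a ^ 1   # flips the single input bit

# Truth table for AND: only 1 AND 1 yields 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b))
```

Every instruction a processor executes is ultimately built out of circuits implementing combinations of these operations.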

When we ask a computer to do something for us, even just pressing a single key on the keyboard, it follows a set of instructions. The CPU carries out a fetch, decode, execute, and store cycle. Originally proposed by the mathematician, physicist, and engineer John von Neumann, this is the basic instruction cycle of a computer: the continuous action the CPU carries out to run the computer, even when nothing appears to be happening. Each instruction is stored as a set of binary values in memory, which is located as close to the CPU as possible.
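The cycle can be sketched as a small loop over a toy machine. The instruction format, opcodes, and memory layout below are invented for illustration (each instruction is one byte: a 4-bit opcode and a 4-bit memory address); real CPUs use richer formats, but the fetch-decode-execute-store rhythm is the same.

```python
# Invented opcodes for a toy machine.
LOAD, ADD, STORE, HALT = 0b0001, 0b0010, 0b0011, 0b0000

memory = [0] * 16
memory[0] = (LOAD  << 4) | 14   # load memory[14] into the accumulator
memory[1] = (ADD   << 4) | 15   # add memory[15] to the accumulator
memory[2] = (STORE << 4) | 13   # store the accumulator into memory[13]
memory[3] = (HALT  << 4)        # stop
memory[14], memory[15] = 5, 7   # the data to add

pc, acc = 0, 0                  # program counter and accumulator
while True:
    instruction = memory[pc]                                  # fetch
    opcode, operand = instruction >> 4, instruction & 0b1111  # decode
    pc += 1
    if opcode == LOAD:                                        # execute
        acc = memory[operand]
    elif opcode == ADD:
        acc += memory[operand]
    elif opcode == STORE:                                     # store
        memory[operand] = acc
    elif opcode == HALT:
        break

print(memory[13])  # 12
```

The loop never stops deciding what to do next on its own; it simply fetches whatever instruction the program counter points at, which is why the cycle runs continuously.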

Computer architecture

The Central Processing Unit (CPU) is the brain of a computer system. It performs all major calculations and comparisons, and it also activates and controls the operations of the other units of the computer system. Hence, no other single component of a computer determines its overall performance as much as its CPU. The Control Unit (CU) and the Arithmetic Logic Unit (ALU) are the two basic components of a CPU.
The control unit of a CPU selects and interprets program instructions and then coordinates their execution. It has some special purpose registers and a decoder to perform these activities.
When the control unit encounters an instruction that involves an arithmetic operation (such as add, subtract, multiply, or divide) or a logic operation (such as less than, equal to, or greater than), it passes control to the ALU. The ALU also has some special-purpose registers and the necessary circuitry to carry out all arithmetic and logic operations included in the set of instructions supported by the CPU.
When the entire CPU is contained on a single tiny silicon chip, it is called a microprocessor.

CPU Instruction Set

Every CPU has a built-in ability to execute a set of machine instructions called its instruction set. Most CPUs have 200 or more instructions (such as add, subtract, and compare) in their instruction set. The list of instructions supported by a CPU in its instruction set forms the basis for designing the machine language for the CPU. Since each processor has a unique instruction set, machine language programs written for one computer will not run on another computer with a different CPU.
CPUs made by different manufacturers have different instruction sets. In fact, different CPU models from the same manufacturer may have different instruction sets. However, manufacturers tend to group their CPUs into "families" having similar instruction sets: when developing a new CPU, they ensure that its instruction set includes all the instructions of its predecessor CPU plus some new ones. This design strategy is known as backward compatibility. In this way, software written for a computer with a particular CPU can run on computers with newer processors of the same family, which allows users of these computer systems to upgrade without worrying about their existing software.

Programming Languages

The main difference between a natural language and a computer language is that natural languages have a large vocabulary, while most computer languages use a limited or restricted vocabulary. This is because a programming language, by its very nature and purpose, does not need to say much. Every problem to be solved by a computer has to be broken into discrete (simple and separate) logical steps comprising four fundamental operations: input and output operations, arithmetic operations, operations for data movement within the CPU and memory, and logical or comparison operations.
All computer languages are broadly classified into the following three categories:
1. Machine Language
2. Assembly Language
3. High-level Language

Machine Language

Machine language is the only language that a computer understands without using a translation program. Normally, the machine language instructions of a computer are written as strings of binary 1s and 0s, and the circuitry of the computer converts them into the electrical signals needed to execute them.
A machine language instruction normally has a two-part format. The first part is the operation code, which tells the computer what operation to perform; the second part is the operand, which tells it where to find or store the data on which to perform the operation. Obviously, this language is not easy to use, because the code must be written in the binary number system. This set of instructions, whether in binary or decimal, which a computer can understand without the help of a translating program, is called machine code or a machine language program. A computer executes programs written in machine language at great speed, because it understands machine instructions without any translation. However, machine language also has several disadvantages: it is machine dependent, difficult to program, error prone, and difficult to correct and modify.
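The two-part format can be illustrated by splitting an instruction into its fields. The 8-bit layout below (a 4-bit operation code followed by a 4-bit operand) is invented for illustration; real machine instructions are wider and more varied, but the principle is the same.

```python
def decode(instruction):
    """Split an 8-bit instruction into its two parts."""
    opcode  = instruction >> 4       # high 4 bits: what operation to perform
    operand = instruction & 0b1111   # low 4 bits: where the data is
    return opcode, operand

# 0b0010_1100: operation code 0010 (say, ADD), operand 1100 (address 12).
print(decode(0b00101100))  # (2, 12)
```

Writing and reading such bit patterns by hand is exactly the tedium that makes machine language programming error prone.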

Assembly Language

A language that allows use of letters and symbols instead of numbers for representing instructions and storage locations is called assembly language or symbolic language. A program written in an assembly language is called assembly language program or symbolic program.
Assembly language programming, introduced in 1952, helped in overcoming limitations of machine language programming in the following manner:

1. By using alphanumeric mnemonic codes instead of numeric codes for the instructions in the instruction set, such as ADD instead of 1110, SUB instead of 1111, and so on.
2. By allowing use of alphanumeric names instead of numeric addresses for representing addresses of fixed storage locations, such as FRST, SCND, and ANSR instead of 1000, 1001, and 1002.
3. By providing additional instructions, called pseudo-instructions, in the instruction set for instructing the system how we want it to assemble the program in computer's memory, for example:

START PROGRAM AT 0000
SET ASIDE AN ADDRESS FOR FRST


Assembler

Since a computer can directly execute only machine language programs, we must convert an assembly language program into its equivalent machine language program before it can be executed on the computer. A translator program called an assembler does this translation. It is so called because, in addition to translating an assembly language program into its equivalent machine language program, it also "assembles" the machine language program in the computer's main memory and makes it ready for execution.
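The core of this translation can be sketched as a lookup from mnemonics and symbolic names to numbers. The opcodes below are invented for illustration; the symbolic names FRST, SCND, and ANSR and their addresses follow the example given earlier in this section.

```python
# A toy assembler: translate each "MNEMONIC NAME" source line into a
# numeric (opcode, address) pair, as the text describes.
OPCODES   = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}
ADDRESSES = {"FRST": 1000, "SCND": 1001, "ANSR": 1002}

def assemble(source):
    """Translate assembly source lines into machine-level number pairs."""
    program = []
    for line in source.strip().splitlines():
        mnemonic, name = line.split()
        program.append((OPCODES[mnemonic], ADDRESSES[name]))
    return program

source = """
LOAD FRST
ADD SCND
STORE ANSR
"""
print(assemble(source))  # [(1, 1000), (2, 1001), (3, 1002)]
```

A real assembler also builds its symbol table from the program itself and handles pseudo-instructions, but the essential job, replacing names with numbers, is the one shown here.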

Due to the additional translation process involved, the computer has to spend more time getting the desired result from an assembly language program than from a machine language program. However, assembly language programming saves so much of a programmer's time and effort that the extra time spent by the computer is worth it; translation is a one-time process that takes relatively little time. Once the system obtains the object code, it can save it and subsequently execute it as many times as required.

Programmers rarely write programs of any significant size in assembly language today. They write assembly language code only when a program's execution efficiency is important. Assembly language programming helps in producing compact, fast, and efficient code because it gives the programmer total control of the computer's CPU, memory, and registers. Hence, programmers use assembly language programming mainly to fine-tune important parts of programs written in a higher-level language.
