1.1.2 Types of processor

There are two main approaches to processor architecture:

| RISC processors | CISC processors |
| --- | --- |
| All instructions are executed within a single clock cycle | Includes complex instructions which take many clock cycles |
| A smaller instruction set means fewer transistors are needed, reducing the power required and the cost to build | A larger instruction set means more transistors are needed, increasing the power required and the cost to build |
| As instructions usually take a single clock cycle, pipelining can be used to increase processor efficiency (see the sketch below the table) | As instructions may take multiple clock cycles, pipelining can't be used to increase processor efficiency |
| More memory is required to store programs | Less memory is required to store programs |
| Harder for programmers to write programs for (in assembly language), as more instructions need to be written and its assembly language isn't very similar to a high-level language | Easier for programmers to write programs for (in assembly language), as fewer instructions need to be written and its assembly language is closer to a high-level language |
| The compiler needs to do more work to translate high-level code into machine code (as code needs to be broken down into very simple instructions) | The compiler needs to do less work to translate high-level code into machine code (as high-level operations can map onto more complex instructions) |
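A rough way to see why single-cycle instructions suit pipelining is to compare cycle counts. The sketch below is a simplified model, not a measurement of any real processor; the five-stage pipeline and the instruction count are illustrative assumptions.

```python
# Simplified model: cycles needed to run n instructions on a 5-stage
# processor, with and without pipelining. Figures are illustrative only.

PIPELINE_STAGES = 5          # e.g. fetch, decode, execute, memory, write-back
NUM_INSTRUCTIONS = 1000

# Without pipelining, each instruction occupies the processor for all of
# its stages before the next instruction can start.
cycles_unpipelined = NUM_INSTRUCTIONS * PIPELINE_STAGES

# With pipelining, a new instruction enters the pipeline every cycle once
# the first instruction has filled it, so throughput approaches one
# instruction per clock cycle.
cycles_pipelined = PIPELINE_STAGES + (NUM_INSTRUCTIONS - 1)

print(f"Unpipelined: {cycles_unpipelined} cycles")
print(f"Pipelined:   {cycles_pipelined} cycles")
print(f"Speed-up:    {cycles_unpipelined / cycles_pipelined:.1f}x")
```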

Graphics Processing Units (GPUs) are specialised electronic circuits which are very efficient at manipulating computer graphics and performing image processing. GPUs contain a large number of cores and are designed to carry out the same operation on many items of data at once.

These features mean that a GPU is more efficient at displaying graphics than a CPU, which is a general-purpose processor. The GPU is a form of co-processor: it may be used by the CPU to compute suitable sections of a program, while the rest of the sequential code (which the GPU can't run) is run by the CPU, resulting in improved performance for the user.
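As a rough illustration of the data-parallel style of work a GPU is built for, the sketch below applies the same brightness adjustment to every pixel of an image in one whole-array operation. It uses NumPy purely to show the "same operation on many data items" pattern; NumPy itself runs on the CPU, and the image data here is made up for the example.

```python
import numpy as np

# A fake 1080p greyscale "image": one brightness value per pixel.
image = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint16)

# Data-parallel style: the same brightness operation is applied to every
# pixel as a single array operation, rather than looping over pixels one
# at a time as a general-purpose CPU program typically would.
brightened = np.clip(image + 40, 0, 255)

print(brightened.shape, brightened.max())
```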

Due to their specialist capabilities, GPUs also have many uses outside of graphics processing, such as modelling physical systems, audio processing, cracking passwords and machine learning among many others.

A parallel system is one in which multiple independent processors work simultaneously on the same program, which can significantly improve performance. The work is divided into smaller subtasks, which can then be processed by any of the processors; this is coordinated by the operating system. Parallel processing can be achieved in different ways: by using multiple processors in a computer, or by distributing tasks across multiple cores in a CPU or GPU (such as SIMD processing).
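A minimal sketch of this idea in Python, using the standard-library concurrent.futures module: the work (here a simple sum of squares, chosen only for illustration) is split into independent subtasks, and the operating system schedules the worker processes across the available cores.

```python
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(chunk):
    # Each subtask works on its own chunk independently of the others.
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))

    # Divide the problem into smaller subtasks (one chunk per worker).
    chunks = [numbers[i::4] for i in range(4)]

    # The OS schedules the worker processes across the available cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        partial_results = list(pool.map(sum_of_squares, chunks))

    # Combine the partial results into the final answer.
    print(sum(partial_results))
```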

| Advantages of parallel processing | Disadvantages of parallel processing |
| --- | --- |
| Faster than a single processor when handling high volumes of data that require the same processing | Not all programs can use parallel processing, for example when one part of the program requires the result of another part (see the sketch below the table) |
| Usually allows programs to be executed more quickly | May cost more, as more processing units are required |
| Allows multiple programs to be run simultaneously | Harder to write software that makes use of parallel processing |
| | A more complex operating system is required to manage the processors |
| | Higher energy consumption |
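The first disadvantage above, parts of a program depending on each other's results, can be seen in the contrast below: the first loop's iterations are independent and could be shared between processors, while the second loop's iterations each need the previous result, so they must run one after another. The data and calculations are illustrative only.

```python
values = [3, 1, 4, 1, 5, 9, 2, 6]

# Independent work: each result depends only on its own input value,
# so the iterations could be handed to different processors.
squares = [v * v for v in values]

# Dependent work: each step needs the result of the previous step
# (a running total), so the iterations cannot run in parallel.
running_total = []
total = 0
for v in values:
    total += v
    running_total.append(total)

print(squares)
print(running_total)
```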