Monday, August 10, 2009

Computer monitor


A monitor or display (sometimes called a visual display unit) is a piece of electrical equipment which displays images generated by devices such as computers, without producing a permanent record. The monitor comprises the display device, circuitry, and an enclosure. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD), while older monitors use a cathode ray tube (CRT).

Wednesday, March 18, 2009

Scanner

Scanner may refer to a number of technological devices:

• Scanner (radio), for searching for and receiving radio broadcasts
• A rotating radar antenna
• Image scanner, which digitizes a two-dimensional image
• 3D scanner, which digitizes the three-dimensional shape of a real object
• Barcode reader, which reads the data encoded in a barcode
• Vulnerability scanner, a computer program that probes for weaknesses
• Lexical analyzer, a computer program
• Stepper, a part of the photolithography process
• An outside broadcast control vehicle
• An automotive electronic control unit diagnostic tool
• An automated spotlight
• Port scanner, in computer networking
• A biometric scanner, an electronic device with a sensor that reads patterns or images from faces, irises and finger pads to create a biometric template

In popular culture, scanner may refer to:

• Scanner (band), a German speed metal band
• Scanner (Code Lyoko), a fictional teleportation device
• Scanner, the stage name of British electronic musician Robin Rimbaud
• A floating enemy in the computer game Half-Life 2
• "Scanners Live in Vain", a science fiction short story by Cordwainer Smith
• A Scanner Darkly, a science fiction novel by Philip K. Dick
• Scanners, a 1981 science fiction horror film

Keyboard

In computing, a keyboard is an input device, partially modeled after the typewriter keyboard, that uses an arrangement of buttons or keys which act as mechanical levers or electronic switches. A keyboard typically has characters engraved or printed on the keys, and each press of a key typically corresponds to a single written symbol. Producing some symbols, however, requires pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers, or signs (characters), other keys or simultaneous key presses can produce actions or computer commands.

In normal usage, the keyboard is used to type text and numbers into a word processor, text editor or other program. In a modern computer, the interpretation of keypresses is generally left to the software. A computer keyboard distinguishes each physical key from every other and reports all keypresses to the controlling software. Keyboards are also used for computer gaming, either with regular keyboards or by using special gaming keyboards, which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination, which brings up a task window or shuts down the machine.
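The software interpretation of keypresses described above can be sketched as a simple keymap lookup. The key names and the tiny layout below are illustrative assumptions, not any real driver's tables:

```python
# Minimal sketch of software keypress interpretation: the keyboard reports
# raw key events, and software maps them to characters, honoring modifiers.
# The keymap is a tiny made-up fragment, not a real keyboard layout.

KEYMAP = {
    ("a", False): "a", ("a", True): "A",
    ("1", False): "1", ("1", True): "!",
}

def interpret(events):
    """Translate a stream of (key, shift_held) events into text."""
    return "".join(KEYMAP.get(ev, "") for ev in events)

print(interpret([("a", True), ("1", False), ("a", False)]))  # prints "A1a"
```

The point is that the keyboard itself only reports which physical keys went down; whether Shift+a means "A" is entirely the software's decision.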

DVD-R

DVD-R is a DVD recordable format. A DVD-R typically has a storage capacity of 4.71 GB (or 4.38 GiB), although the capacity of the original standard developed by Pioneer was 3.95 GB (3.68 GiB). Both values are significantly larger than the storage capacity of its optical predecessor, the 700 MB CD-R – a DVD-R has 6.4 times the capacity of a CD-R. Pioneer has also developed an 8.54 GB dual layer version, DVD-R DL, which appeared on the market in 2005.
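The capacity figures above follow from the decimal/binary unit distinction. A quick check, using the nominal 4,707,319,808-byte single-layer DVD-R capacity:

```python
# Verify the DVD-R capacity figures: GB (decimal) vs GiB (binary),
# and the ~6.4x ratio over a 700 MiB CD-R.
dvd_r_bytes = 4_707_319_808          # nominal single-layer DVD-R capacity
cd_r_bytes = 700 * 1024 * 1024       # a "700 MB" CD-R is 700 MiB

gb = dvd_r_bytes / 1000**3           # decimal gigabytes
gib = dvd_r_bytes / 1024**3          # binary gibibytes

print(f"{gb:.2f} GB")                                # 4.71 GB
print(f"{gib:.2f} GiB")                              # 4.38 GiB
print(f"{dvd_r_bytes / cd_r_bytes:.1f}x a CD-R")     # 6.4x a CD-R
```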

Data on a DVD-R cannot be changed, whereas a DVD-RW (DVD-rewritable) can be rewritten multiple (1000+) times. DVD-R(W) is one of three competing industry standard DVD recordable formats; the others are DVD+R(W) and DVD-RAM.

Floppy Disk

A floppy disk is a data storage medium composed of a disk of thin, flexible ("floppy") magnetic storage medium encased in a square or rectangular plastic shell. Floppy disks are read and written by a floppy disk drive or FDD, the initials of which should not be confused with "fixed disk drive," which is another term for a (nonremovable type of) hard disk drive. Invented by IBM, floppy disks in 8-inch (200 mm), 5¼-inch (133⅓ mm), and the newest and most common 3½-inch (90 mm) formats enjoyed many years as a popular and ubiquitous form of data storage and exchange, from the mid-1970s to the late 1990s. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, they have now been largely superseded by USB flash drives, external hard drives, CD-ROMs and DVD-ROMs.

DVD-RW

A DVD-RW disc is a rewritable optical disc with equal storage capacity to a DVD-R, typically 4.7 GB. The format was developed by Pioneer in November 1999 and has been approved by the DVD Forum. Unlike DVD-RAM, it is playable in about 75% of conventional DVD players. The smaller Mini DVD-RW holds 1.46 GB, with a diameter of 8 cm.

The primary advantage of DVD-RW over DVD-R is the ability to erase and rewrite the disc. According to Pioneer, DVD-RW discs may be written to about 1,000 times before needing replacement, making them comparable with the CD-RW standard. DVD-RW discs are commonly used for data that changes frequently, such as backups or collections of files. They are also increasingly used in home DVD video recorders. A further benefit of a rewritable disc is that a writing error does not ruin the disc: the faulty data can simply be erased and the disc reused.

One competing rewritable format is DVD+RW. Hybrid drives that can handle both, often labeled "DVD±RW", are very popular due to the lack of a single standard for recordable DVDs.

The recording layer in DVD-RW and DVD+RW is not an organic dye, but a special phase change metal alloy, often GeSbTe. The alloy can be switched back and forth between a crystalline phase and an amorphous phase, changing the reflectivity, depending on the power of the laser beam. Data can thus be written, erased and re-written.

There is now a new format called DVD-RW2. Older DVD burners are not all forward compatible with this new standard.

The fastest write speed currently available for DVD-RW discs is 6x; many discs rated at this speed also support DVD-RW2.

DVD-ROM

DVD, also known as "Digital Versatile Disc" or "Digital Video Disc," is an optical disc storage media format. Its main uses are video and data storage. Most DVDs are of the same dimensions as compact discs (CDs) but store more than six times as much data.

Variations of the term DVD often describe the way data is stored on the discs: DVD-ROM (Read Only Memory) holds data that can only be read, not written; DVD-R and DVD+R can record data once and thereafter function as a DVD-ROM; and DVD-RW, DVD+RW and DVD-RAM can both record and erase data multiple times. The wavelength used by standard DVD lasers is 650 nm,[1] so the light has a red color.

DVD-Video and DVD-Audio discs respectively refer to properly formatted and structured video and audio content. Other types of DVDs, including those with video content, may be referred to as DVD-Data discs. As next generation high-definition optical formats also use a disc identical in some aspects yet more advanced than a DVD, such as Blu-ray Disc, the original DVD is occasionally given the retronym SD DVD (for standard definition).

CD-RW

Compact Disc ReWritable (CD-RW) is a rewritable optical disc format. Known as CD-Erasable (CD-E) during its development, CD-RW was introduced in 1997, and was preceded by the never officially released CD-MO in 1988.

Applications and limitations

CD-RW discs never gained the widespread popularity of CD-R, partly due to their higher per-unit price, lower recording and reading speeds, and compatibility issues with CD reading units, as well as between CD-RW formats of different speed specifications.

Also, compared to other forms of rewritable media such as Zip drives, Iomega Jaz drives, Magneto-optical and flash memory based media, the CD-RW format uses the standard CD-ROM and CD-R file systems and storage strategies, which are inherently unsuitable for repeated small-scale file additions and deletions, thus making the use of CD-RW as a true removable disk impractical.

CD-RW discs also have a shorter rewrite-cycle life (ca. 1,000) than virtually all of the other rewritable media mentioned above (typically well above 10,000 or even 100,000 cycles). This is less of a drawback in practice, however, since CD-RWs are usually written and erased in their entirety rather than through repeated small-scale changes, so wear leveling is normally not an issue.

Their ideal uses are the creation of test discs, temporary short- or mid-term backups, and, in general, situations that call for an intermediate solution between online and offline storage.

CD-ROM

CD-ROM (an initialism of "Compact Disc Read-Only Memory") is a pre-pressed Compact Disc that contains data accessible to, but not writable by, a computer. While the Compact Disc format was originally designed for music storage and playback, the 1985 “Yellow Book” standard developed by Sony and Philips adapted the format to hold any form of binary data.

CD-ROMs are popularly used to distribute computer software, including games and multimedia applications, though any data can be stored (up to the capacity limit of a disc). Some CDs, called enhanced CDs, hold both computer data and audio: the audio can be played on a CD player, while the data (such as software or digital video) is only usable on a computer.

Although many people use lowercase letters in this acronym, proper presentation is in all capital letters with a hyphen between CD and ROM. It was also suggested by some, especially soon after the technology was first released, that CD-ROM was an acronym for "Compact Disc read-only-media", or that it was a more "correct" definition. This was not the intention of the original team who developed the CD-ROM, and common acceptance of the "memory" definition is now almost universal. This is probably in no small part due to the widespread use of other "ROM" acronyms such as Flash-ROMs and EEPROMs where "memory" is usually the correct term.

Compact Disc (CD)

A Compact Disc (also known as a CD) is an optical disc used to store digital data, originally developed for storing digital audio. The CD, available on the market since October 1982, remains the standard physical medium for sale of commercial audio recordings to the present day.

Standard CDs have a diameter of 120 mm and can hold up to 80 minutes of audio (700 MB of data). The Mini CD comes in various diameters ranging from 60 to 80 mm; it is sometimes used for CD singles or device drivers, storing up to 24 minutes of audio.
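The 80-minute / 700 MB relationship can be checked from the CD's sector layout: the disc spins off 75 sectors per second, each carrying 2,352 bytes of audio, of which a Mode 1 data sector keeps only 2,048 bytes for user data (the rest goes to error correction):

```python
# How 80 minutes of audio and "700 MB" of data describe the same disc.
SECTORS_PER_SECOND = 75
AUDIO_BYTES_PER_SECTOR = 2352   # raw audio payload per sector
DATA_BYTES_PER_SECTOR = 2048    # Mode 1 user data (rest is error correction)

sectors = 80 * 60 * SECTORS_PER_SECOND          # 360,000 sectors on an 80-min disc

audio_bytes = sectors * AUDIO_BYTES_PER_SECTOR  # 846,720,000 bytes of raw audio
data_bytes = sectors * DATA_BYTES_PER_SECTOR    # 737,280,000 bytes of user data

print(f"{data_bytes / 1024**2:.0f} MiB of data")   # 703 MiB, marketed as "700 MB"
```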

The technology was later adapted and expanded to include data storage CD-ROM, write-once audio and data storage CD-R, rewritable media CD-RW, Super Audio CD (SACD), Video Compact Discs (VCD), Super Video Compact Discs (SVCD), PhotoCD, PictureCD, CD-i, and Enhanced CD. CD-ROMs and CD-Rs remain widely used technologies in the computer industry. The CD and its extensions have been extremely successful: in 2004, worldwide sales of CD audio, CD-ROM, and CD-R reached about 30 billion discs. By 2007, 200 billion CDs had been sold worldwide.

Wednesday, January 21, 2009

Printer (computing)


In computing, a printer is a peripheral which produces a hard copy (permanent human-readable text and/or graphics) of documents stored in electronic form, usually on physical print media such as paper or transparencies. Many printers are primarily used as local peripherals, and are attached by a printer cable or, in most newer printers, a USB cable to a computer which serves as a document source. Some printers, commonly known as network printers, have built-in network interfaces (typically wireless or Ethernet), and can serve as a hardcopy device for any user on the network. Individual printers are often designed to support both local and network connected users at the same time.

In addition, a few modern printers can directly interface with electronic media such as memory sticks or memory cards, or with image capture devices such as digital cameras and scanners; some printers are combined with a scanner and/or fax machine in a single unit, and can function as photocopiers. Printers that include non-printing features are sometimes called Multifunction Printers (MFP), Multi-Function Devices (MFD), or All-In-One (AIO) printers. Most MFPs include printing, scanning, and copying among their features.

Printers are designed for low-volume, short-turnaround print jobs, requiring virtually no setup time to produce a hard copy of a given document. However, printers are generally slow devices (30 pages per minute is considered fast, and many consumer printers are far slower than that), and the cost per page is relatively high. The printing press remains the machine of choice for high-volume, professional publishing. However, as printers have improved in quality and performance, many jobs that used to be done by professional print shops are now done by users on local printers; see desktop publishing. The world's first computer printer was a 19th-century mechanically driven apparatus invented by Charles Babbage for his Difference Engine.

Intel® Pentium® D Processor

The Pentium D brand refers to two series of dual-core 64-bit x86 processors with the NetBurst microarchitecture manufactured by Intel. Each CPU comprised two dies, each containing a single core residing next to each other on a multi-chip module package. The brand's first processor, codenamed Smithfield, was released by Intel on May 25, 2005. Nine months later, Intel introduced its successor, codenamed Presler, but without offering significant upgrades in design, still resulting in a relatively high power consumption. By 2005, the NetBurst processors reached a clock speed barrier at 4 GHz due to a thermal (and power) limit exemplified by the Presler's 130 W Thermal Design Power (a higher TDP requires additional cooling that can be prohibitively noisy or expensive). The future belonged to more efficient and slower clocked dual-core CPUs on a single die instead of two. The final shipment date of the dual die Presler chips was August 8, 2008, which marked the end of the Pentium D brand and also the NetBurst microarchitecture.

Pentium D
Central Processing Unit


Produced .......................From 2005 to 2008
Common manufacturer(s)..........Intel
Max CPU clock ..................2.66 GHz to 3.73 GHz
FSB speeds......................533 MT/s to 1066 MT/s
Min feature size................0.09 µm to 0.065 µm
Instruction set.................MMX, SSE, SSE2, SSE3, x86-64
Microarchitecture...............NetBurst
Cores...........................2 (2x1)
Socket(s).......................LGA 775
Core name(s)....................Smithfield, Presler

Computer Power Supply

A modern computer power supply is a switched-mode supply designed to convert 110–240 V AC mains power into several DC output voltages, both positive and (historically) negative, typically ranging from 3.3 V to 12 V. The first computer power supplies were linear devices, but as cost became a driving factor and weight became important, switched-mode supplies became almost universal.

The various output voltages also have widely varying current-draw requirements, which are difficult to supply from a single switched-mode source. Consequently, most modern computer power supplies actually consist of several different switched-mode supplies, each producing just one voltage component and each able to vary its output based on component power requirements; all are linked together to shut down as a group in the event of a fault condition.

The most common modern computer power supplies are built to conform to the ATX form factor. The power rating of a PC power supply is not officially certified and is self-claimed by each manufacturer. A common way to arrive at the power figure for PC PSUs is to add up the power available on each rail, which does not give a true power figure. The more reputable makers advertise "True Wattage Rated" supplies to assure consumers that they can trust the power advertised.
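Summing per-rail power, as described above, looks like this. The rail voltages and current limits below are made-up example values for a generic ATX-style supply, not any specific product's rating:

```python
# Naive "total wattage" figure obtained by summing each rail's maximum output.
# Rails cannot all be loaded to their maximums simultaneously, so this
# overstates the usable power. Rail ratings below are hypothetical examples.
rails = {
    "+3.3V": (3.3, 20.0),   # (volts, max amps)
    "+5V":   (5.0, 22.0),
    "+12V":  (12.0, 18.0),
}

total_w = sum(volts * amps for volts, amps in rails.values())
print(f"Advertised-style total: {total_w:.0f} W")   # 66 + 110 + 216 = 392 W
```

This is exactly the arithmetic that produces an inflated headline wattage: the supply's shared transformer and cooling cannot actually deliver all three maximums at once.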

Central Processing Unit (CPU)

A central processing unit (CPU) is a machine that can execute computer programs. This broad definition can easily be applied to many early computers that existed long before the term "CPU" ever came into widespread usage. The term itself and its initialism have been in use in the computer industry at least since the early 1960s (Weik 1961). The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation has remained much the same.

Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are suited for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones to children's toys.

Types of RAM

Modern types of writable RAM generally store a bit of data either in the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and Flash. Some types have circuitry to detect and/or correct random faults in the stored data, called memory errors, using parity bits or error correction codes. RAM of the read-only type, ROM, uses a metal mask to permanently enable/disable selected transistors rather than storing a charge in them.
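Parity checking, the simplest of the error-detection schemes mentioned above, can be sketched in a few lines. This is even parity over an 8-bit word: a single flipped bit is detected, though not located or corrected:

```python
# Even-parity error detection for a stored byte: the parity bit makes the
# total number of 1-bits even. A single bit flip changes the count's parity
# and is therefore detected on read-back.

def parity_bit(byte: int) -> int:
    """Return the even-parity bit for an 8-bit value."""
    return bin(byte).count("1") % 2

def check(byte: int, stored_parity: int) -> bool:
    """True if the byte still matches its stored parity bit."""
    return parity_bit(byte) == stored_parity

word = 0b1011_0010
p = parity_bit(word)            # four 1-bits, so the parity bit is 0

assert check(word, p)           # unchanged data passes
corrupted = word ^ 0b0000_0100  # flip a single bit
assert not check(corrupted, p)  # single-bit error detected
```

Full error correction codes (such as the Hamming codes used in ECC memory) extend this idea with enough redundant bits to locate, and therefore fix, the flipped bit.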

As both SRAM and DRAM are volatile, other forms of computer storage, such as disks and magnetic tapes, have been used as "permanent" storage in traditional computers. Many newer products instead rely on flash memory to maintain data between sessions of use: examples include PDAs, small music players, mobile phones, synthesizers, advanced calculators, industrial instrumentation and robotics, and many other types of products; even certain categories of personal computers, such as the OLPC XO-1, Asus Eee PC, and others, have begun replacing magnetic disk with so called flash drives (similar to fast memory cards equipped with an IDE or SATA interface).

There are two basic types of flash memory: the NOR type, which is capable of true random access, and the NAND type, which is not; the former is therefore often used in place of ROM, while the latter is used in most memory cards and solid-state drives, due to a lower price.

RAM (Random-Access Memory)


Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.[1]

This contrasts with storage mechanisms such as tapes, magnetic discs and optical discs, which rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies depending on the physical location of the next item.
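The contrast between random and sequential access can be illustrated with two toy read routines; the point is the access pattern, not the simulated media:

```python
# Toy contrast between random access (RAM-like) and sequential access
# (tape-like). The storage is just a Python list standing in for the medium.
data = list(range(1_000))

def read_random(storage, index):
    """RAM-like: jump straight to any location; cost independent of position."""
    return storage[index]

def read_sequential(storage, index):
    """Tape-like: wind past every earlier item; cost grows with position."""
    steps = 0
    for i, value in enumerate(storage):
        steps += 1
        if i == index:
            return value, steps   # steps taken depends on where the item is

assert read_random(data, 900) == 900
assert read_sequential(data, 900) == (900, 901)  # had to pass 901 items
```

This is what "constant time, regardless of physical location" means in practice: the random read costs the same for item 0 or item 900, while the sequential read's cost is proportional to the item's position.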

The word RAM is mostly associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. However, many other types of memory are RAM as well (i.e., Random Access Memory), including most types of ROM and a kind of flash memory called NOR-Flash.

MOTHERBOARD

Most computer motherboards produced today are designed for IBM-compatible computers, which currently account for around 90% of global PC sales. A motherboard, like a backplane, provides the electrical connections by which the other components of the system communicate, but unlike a backplane, it also hosts the central processing unit, and other subsystems and devices.

Motherboards are also used in many other electronics devices.

A typical desktop computer has its microprocessor, main memory, and other essential components on the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables, although in modern computers it is increasingly common to integrate some of these peripherals into the motherboard itself.

An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard.

Modern motherboards include, at a minimum:

• sockets (or slots) in which one or more microprocessors are installed[3]
• slots into which the system's main memory is installed (typically in the form of DIMM modules containing DRAM chips)
• a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses
• non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS
• a clock generator which produces the system clock signal to synchronize the various components
• slots for expansion cards (these interface to the system via the buses supported by the chipset)
• power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards.[4]

Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as PS/2 connectors for a mouse and keyboard. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard, for example on the Apple II, and rarely on IBM-compatible computers such as the IBM PCjr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards.

Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heatsinks and mounting points for fans to dissipate excess heat.

Complex Instruction Set Computer (CISC)


A complex instruction set computer (CISC, pronounced like "sisk") is a computer instruction set architecture (ISA) in which each instruction can execute several low-level operations, such as a load from memory, an arithmetic operation, and a memory store, all in a single instruction. The term was retroactively coined in contrast to reduced instruction set computer (RISC).
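The contrast can be sketched with a toy machine: a single CISC-style instruction that loads, adds, and stores, versus the equivalent RISC-style sequence of simple instructions. The instruction names and addresses are invented for illustration:

```python
# Toy illustration of CISC vs RISC: one complex instruction vs an equivalent
# sequence of simple ones. Two identical memories stand in for two machines.
mem_a = {0x10: 7, 0x20: 5}   # machine running the CISC version
mem_b = {0x10: 7, 0x20: 5}   # machine running the RISC version
regs = {}

def cisc_add_mem(mem, dst, src):
    """One CISC instruction: memory load, add, and memory store combined."""
    mem[dst] += mem[src]

# RISC primitives: each instruction does exactly one simple thing.
def load(mem, reg, addr):  regs[reg] = mem[addr]
def add(ra, rb):           regs[ra] += regs[rb]
def store(mem, reg, addr): mem[addr] = regs[reg]

cisc_add_mem(mem_a, 0x10, 0x20)   # one instruction does all the work

load(mem_b, "r1", 0x10)           # the same effect takes four RISC instructions
load(mem_b, "r2", 0x20)
add("r1", "r2")
store(mem_b, "r1", 0x10)

assert mem_a[0x10] == mem_b[0x10] == 12
```

The trade-off this models: CISC packs more work per instruction (fewer instructions, more complex decoding), while RISC keeps each instruction trivial and leans on the compiler to string them together.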

Electronic Numerical Integrator and Computer (ENIAC)

The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic general-purpose computer. It combined, for the first time, the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5,000 times a second, a thousand times faster than any other machine. (Colossus could not add.) It also had modules to multiply, divide, and take square roots. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, and contained over 18,000 valves. One of the major engineering feats was minimizing valve burnout, which was a common problem at that time. The machine was in almost constant use for the next ten years.

ENIAC was unambiguously a Turing-complete device. It could compute any problem that would fit in memory. A "program" on the ENIAC, however, was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that evolved from it. Once a program was written, it had to be mechanically set into the machine. Six women did most of the programming of ENIAC. (Improvements completed in 1948 made it possible to execute stored programs set in function table memory, making programming less of a "one-off" effort and more systematic.)

American Developments

In 1937, Claude Shannon produced his master's thesis at MIT, which showed for the first time in history that electronic relays and switches could implement Boolean algebra. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design. George Stibitz completed a relay-based computer he dubbed the "Model K" at Bell Labs in November 1937. Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Calculator, completed January 8, 1940, was able to perform calculations on complex numbers. In a demonstration at the American Mathematical Society conference at Dartmouth College on September 11, 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines via a teletype. It was the first computing machine ever used remotely, in this case over a phone line. Some participants in the conference who witnessed the demonstration were John von Neumann, John Mauchly, and Norbert Wiener, who wrote about it in their memoirs.

In 1939, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed the Atanasoff–Berry Computer (ABC), a special purpose digital electronic calculator for solving systems of linear equations. The design used over 300 vacuum tubes and employed capacitors fixed in a mechanically rotating drum for memory. Though the ABC machine was not programmable, it was the first to use electronic tubes in an adder. ENIAC co-inventor John Mauchly examined the ABC in June 1941, and its influence on the design of the later ENIAC machine is a matter of contention among computer historians. The ABC was largely forgotten until it became the focus of the lawsuit Honeywell v. Sperry Rand, the ruling of which invalidated the ENIAC patent (and several others) as, among many reasons, having been anticipated by Atanasoff's work.

In 1939, development began at IBM's Endicott laboratories on the Harvard Mark I. Known officially as the Automatic Sequence Controlled Calculator, the Mark I was a general purpose electro-mechanical computer built with IBM financing and with assistance from IBM personnel, under the direction of Harvard mathematician Howard Aiken. Its design was influenced by Babbage's Analytical Engine, using decimal arithmetic and storage wheels and rotary switches in addition to electromagnetic relays. It was programmable via punched paper tape, and contained several calculation units working in parallel. Later versions contained several paper tape readers and the machine could switch between readers based on a condition. Nevertheless, the machine was not quite Turing-complete. The Mark I was moved to Harvard University and began operation in May 1944.

Colossus Computer

During World War II, the British at Bletchley Park (40 miles north of London) achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was attacked with the help of electro-mechanical machines called bombes. The bombe, designed by Alan Turing and Gordon Welchman after the Polish cryptologic bomba of Marian Rejewski (1938), came into use in 1941. Bombes ruled out possible Enigma settings by performing chains of logical deductions implemented electrically. Most possibilities led to a contradiction, and the few remaining could be tested by hand.

The Germans also developed a series of teleprinter encryption systems, quite different from Enigma. The Lorenz SZ 40/42 machine was used for high-level Army communications, termed "Tunny" by the British. The first intercepts of Lorenz messages began in 1941. As part of an attack on Tunny, Professor Max Newman and his colleagues helped specify the Colossus. The Mk I Colossus was built between March and December 1943 by Tommy Flowers and his colleagues at the Post Office Research Station at Dollis Hill in London and then shipped to Bletchley Park in January 1944.

Colossus was the first totally electronic computing device. The Colossus used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). Details of their existence, design, and use were kept secret well into the 1970s. Winston Churchill personally issued an order for their destruction into pieces no larger than a man's hand. Due to this secrecy the Colossi were not included in many histories of computing. A reconstructed copy of one of the Colossus machines is now on display at Bletchley Park.

Z-series Calculators

Working in isolation in Germany, Konrad Zuse started construction in 1936 of his first Z-series calculators featuring memory and (initially limited) programmability. Zuse's purely mechanical, but already binary Z1, finished in 1938, never worked reliably due to problems with the precision of parts.

Zuse's later machine, the Z3, was finished in 1941. It was based on telephone relays and did work satisfactorily. The Z3 thus became the first functional program-controlled, all-purpose, digital computer. In many ways it was quite similar to modern machines, pioneering numerous advances, such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time.

Programs were fed into the Z3 on punched film. Conditional jumps were missing, but it has since been proved theoretically that the Z3 was still a universal computer (ignoring its physical storage size limitations). In two 1936 patent applications, Konrad Zuse also anticipated that machine instructions could be stored in the same storage used for data – the key insight of what became known as the von Neumann architecture, first implemented in the later British EDSAC design (1949). Zuse also claimed to have designed the first higher-level programming language, Plankalkül, in 1945 (published in 1948), although it was implemented for the first time in 2000 by a team around Raúl Rojas at the Free University of Berlin – five years after Zuse died.

Zuse suffered setbacks during World War II when some of his machines were destroyed in the course of Allied bombing campaigns. Apparently his work remained largely unknown to engineers in the UK and US until much later, although at least IBM was aware of it as it financed his post-war startup company in 1946 in return for an option on Zuse's patents.

Digital Computation

The era of modern computing began with a flurry of development before and during World War II, as electronic circuit elements[54] replaced mechanical equivalents and digital calculations replaced analog calculations. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium.

In this era, a number of different machines were produced with steadily advancing capabilities. At the beginning of this period, nothing remotely resembling a modern computer existed, except in the long-lost plans of Charles Babbage and the mathematical musings of Alan Turing and others. At the end of the era, devices like the EDSAC had been built, and are universally agreed to be digital computers. Defining a single point in the series as the "first computer" misses many subtleties (see the table "Defining characteristics of some early digital computers of the 1940s" below).

Alan Turing's 1936 paper proved enormously influential in computing and computer science in two ways. Its main purpose was to prove that there were problems (such as the halting problem) that could not be solved by any sequential process. In doing so, Turing provided a definition of a universal computer which executes a program stored on tape. This construct came to be called a Turing machine; it replaced Kurt Gödel's more cumbersome universal language based on arithmetic. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
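The idea of a machine driven by a stored transition table can be sketched in a few lines of code. The following is a minimal illustrative simulator, not Turing's original formalism; the "flip" program and its state names are invented here for demonstration.

```python
# Minimal Turing machine sketch. The transition table maps
# (state, symbol read) -> (new state, symbol to write, head movement),
# which is the "program on tape" idea in miniature.

def run_turing_machine(program, tape, state="start", steps=1000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")          # "_" is the blank symbol
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Hypothetical example program: flip every bit, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_turing_machine(flip, "1011"))  # -> 0100
```

Despite its simplicity, this table-driven loop captures what "universal" means: only the table changes from one computation to the next, never the machine.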

For a computing machine to be a practical general-purpose computer, there must be some convenient read-write mechanism, such as punched tape. With a knowledge of Alan Turing's theoretical 'universal computing machine', John von Neumann defined an architecture which uses the same memory both to store programs and data: virtually all contemporary computers use this architecture (or some variant). While it is theoretically possible to implement a full computer entirely mechanically (as Babbage's design showed), electronics made possible the speed and later the miniaturization that characterize modern computers.
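The stored-program idea can be illustrated with a toy machine in which instructions and data occupy the same memory. The instruction set, encoding, and cell layout below are invented purely for illustration:

```python
# Toy stored-program machine: instructions and data share one memory,
# echoing the von Neumann idea. The instruction set is hypothetical.

def run(memory):
    acc, pc = 0, 0          # accumulator and program counter
    while True:
        op, arg = memory[pc]          # fetch from the same memory as data
        pc += 1
        if op == "LOAD":
            acc = memory[arg]         # read a data cell
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc         # write a data cell
        elif op == "HALT":
            return memory

# The program occupies cells 0-3; the data lives in cells 4-6 of the
# very same memory list.
memory = [
    ("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0),
    2, 3, 0,
]
print(run(memory)[6])  # -> 5
```

Because programs are just memory contents, a program could in principle modify itself or be generated by another program, which is precisely what made the architecture so flexible.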

There were three parallel streams of computer development in the World War II era; the first stream largely ignored, and the second stream deliberately kept secret. The first was the German work of Konrad Zuse. The second was the secret development of the Colossus computers in the UK. Neither of these had much influence on the various computing projects in the United States. The third stream of computer development, Eckert and Mauchly's ENIAC and EDVAC, was widely publicized.

Advanced Analog Computers

Before World War II, mechanical and electrical analog computers were considered the "state of the art", and many thought they were the future of computing. Analog computers take advantage of the strong similarities between the mathematics of small-scale properties — the position and motion of wheels or the voltage and current of electronic components — and the mathematics of other physical phenomena, for example, ballistic trajectories, inertia, resonance, energy transfer, momentum, and so forth. They model physical phenomena with electrical voltages and currents as the analog quantities.

Centrally, these analog systems work by creating electrical analogs of other systems, allowing users to predict behavior of the systems of interest by observing the electrical analogs. The most useful of the analogies was the way the small-scale behavior could be represented with integral and differential equations, and could be thus used to solve those equations. An ingenious example of such a machine, using water as the analog quantity, was the water integrator built in 1928; an electrical example is the Mallock machine built in 1941. A planimeter is a device which does integrals, using distance as the analog quantity. Until the 1980s, HVAC systems used air both as the analog quantity and the controlling element. Unlike modern digital computers, analog computers are not very flexible, and need to be reconfigured (i.e., reprogrammed) manually to switch them from working on one problem to another. Analog computers had an advantage over early digital computers in that they could be used to solve complex problems using behavioral analogues while the earliest attempts at digital computers were quite limited.
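A differential analyzer solved equations by chaining mechanical integrators, each accumulating rate times small time step, much as a wheel-and-disc integrator accumulates area. The following digital sketch (an assumption-laden analogy, not a model of any real machine) emulates a single integrator solving dy/dt = -y:

```python
# Digital emulation of one analog integrator solving dy/dt = -y with
# y(0) = 1, whose exact solution is e^(-t). The integrator simply
# accumulates rate-of-change times a small time increment.

import math

def integrate(dy_dt, y0, t_end, dt=1e-4):
    y, t = y0, 0.0
    while t < t_end:
        y += dy_dt(y) * dt   # add rate x small time step, as the wheel turns
        t += dt
    return y

y = integrate(lambda y: -y, 1.0, 1.0)
print(y, math.exp(-1))  # the two values should agree to several decimals
```

On an actual analog machine this accumulation happened continuously and in parallel, which is why such machines handled complex differential equations well before digital computers could.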

A Smith chart is a well-known nomogram.

Since computers were rare in this era, solutions were often hard-coded into paper forms such as graphs and nomograms, which could then produce analog solutions to these problems, such as the distribution of pressures and temperatures in a heating system. Some of the most widely deployed analog computers included devices for aiming weapons, such as the Norden bombsight, and fire-control systems, such as Arthur Pollen's Argo system for naval vessels. Some stayed in use for decades after WWII; the Mark I Fire Control Computer was deployed by the United States Navy on a variety of ships from destroyers to battleships. Other analog computers included the Heathkit EC-1 and the hydraulic MONIAC Computer, which modeled econometric flows.

The art of analog computing reached its zenith with the differential analyzer, invented in 1876 by James Thomson and built by H. W. Nieman and Vannevar Bush at MIT starting in 1927. Fewer than a dozen of these devices were ever built; the most powerful was constructed at the University of Pennsylvania's Moore School of Electrical Engineering, where the ENIAC was built. Digital electronic computers like the ENIAC spelled the end for most analog computing machines, but hybrid analog computers, controlled by digital electronics, remained in substantial use into the 1950s and 1960s, and later in some specialized applications. A digital device is limited by its precision (the number of digits it can carry), whereas an analog device is limited by its accuracy (how faithfully its physical quantities represent the values being computed). As electronics progressed during the twentieth century, the problems of operating at low voltages while maintaining high signal-to-noise ratios were steadily addressed; a digital circuit is, after all, a specialized form of analog circuit intended to operate at standardized settings (in the same vein, logic gates can be realized as forms of digital circuits). As digital computers became faster and gained larger memories (for example, RAM or internal storage), they almost entirely displaced analog computers, and computer programming, or coding, arose as a new profession.

Desktop Calculators

By the 1900s, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. The word "computer" was a job title assigned to people who used these calculators to perform mathematical calculations. By the 1920s Lewis Fry Richardson's interest in weather prediction led him to propose human computers and numerical analysis to model the weather; to this day, the most powerful computers on Earth are needed to adequately model the weather using the Navier–Stokes equations.

Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. During the Manhattan Project, future Nobel laureate Richard Feynman was the supervisor of a roomful of human computers, many of them women mathematicians, who understood the differential equations which were being solved for the war effort. Even the renowned Stanisław Ulam was pressed into service to translate the mathematics into computable approximations for the hydrogen bomb after the war.


In 1948, the Curta was introduced. This was a small, portable, mechanical calculator that was about the size of a pepper grinder. Over time, during the 1950s and 1960s a variety of different brands of mechanical calculator appeared on the market. The first all-electronic desktop calculator was the British ANITA Mk.VII, which used a Nixie tube display and 177 subminiature thyratron tubes. In June 1963, Friden introduced the four-function EC-130. It had an all-transistor design, 13-digit capacity on a 5-inch (130 mm) CRT, and introduced reverse Polish notation (RPN) to the calculator market at a price of $2200. The model EC-132 added square root and reciprocal functions. In 1965, Wang Laboratories produced the LOCI-2, a 10-digit transistorized desktop calculator that used a Nixie tube display and could compute logarithms.
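In reverse Polish notation, as introduced to the calculator market by the EC-130, operands precede their operator, so no parentheses are needed and a simple stack suffices for evaluation. The sketch below illustrates the principle only; it makes no claim about the EC-130's actual internal logic:

```python
# Stack-based evaluation of reverse Polish notation (RPN).
# Each operator pops its two operands and pushes the result.

def eval_rpn(tokens):
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # note operand order
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 2 written in RPN:
print(eval_rpn("3 4 + 2 *".split()))  # -> 14.0
```

The stack discipline is exactly why RPN calculators could offer full expression evaluation with modest hardware.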

Earliest Calculators

Devices have been used to aid computation for thousands of years, beginning with one-to-one correspondence with our fingers. The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included clay shapes, which represented counts of items, probably livestock or grains, sealed in containers. The abacus was used for arithmetic tasks; what became the Roman abacus descends from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money (this is the origin of "Exchequer" as a term for a nation's treasury).

A number of analog computers were constructed in ancient and medieval times to perform astronomical calculations. These include the Antikythera mechanism and the astrolabe from ancient Greece (c. 150–100 BC), which are generally regarded as the first mechanical analog computers. Other early mechanical calculating devices include the planisphere and other devices invented by Abū Rayhān al-Bīrūnī (c. AD 1000); the equatorium and universal latitude-independent astrolabe by Abū Ishāq Ibrāhīm al-Zarqālī (c. AD 1015); the astronomical analog computers of other medieval Muslim astronomers and engineers; and the astronomical clock tower of Su Song (c. AD 1090) during the Song Dynasty.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour, and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day to account for the changing lengths of day and night throughout the year.

Scottish mathematician and physicist John Napier noted that multiplication and division of numbers could be performed by addition and subtraction, respectively, of the logarithms of those numbers. While producing the first logarithmic tables Napier needed to perform many multiplications, and it was at this point that he designed Napier's bones, an abacus-like device used for multiplication and division. Since real numbers can be represented as distances or intervals on a line, the slide rule was invented in the 1620s to allow multiplication and division operations to be carried out significantly faster than was previously possible. Slide rules were used by generations of engineers and other mathematically inclined professional workers, until the invention of the pocket calculator. The engineers in the Apollo program to send a man to the Moon made many of their calculations on slide rules, which were accurate to three or four significant figures.
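Napier's observation is easy to verify numerically: adding logarithms and exponentiating the sum recovers the product, which is also the principle behind the slide rule's logarithmic scales.

```python
# Multiplication reduced to addition of logarithms, Napier's key insight.
import math

a, b = 37.0, 89.0
product_via_logs = math.exp(math.log(a) + math.log(b))
print(product_via_logs)  # matches a * b = 3293.0 up to rounding error
```

A slide rule performs the same addition physically: sliding one logarithmic scale along another adds lengths proportional to log(a) and log(b), and the answer is read off where the sum falls.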

German polymath Wilhelm Schickard built the first digital mechanical calculator in 1623, and thus became the father of the computing era. Since his calculator used techniques such as cogs and gears first developed for clocks, it was also called a 'calculating clock'. It was put to practical use by his friend Johannes Kepler, who revolutionized astronomy when he condensed decades of astronomical observations into algebraic expressions. Machines by Blaise Pascal (the Pascaline, 1642) and Gottfried Wilhelm von Leibniz (1671) followed; an original calculator by Pascal is preserved in the Zwinger Museum. Leibniz once said, "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used."

Around 1820, Charles Xavier Thomas created the first successful, mass-produced mechanical calculator, the Thomas Arithmometer, which could add, subtract, multiply, and divide. It was mainly based on Leibniz's work. Mechanical calculators, like the base-ten Addiator, the Comptometer, the Monroe, the Curta and the Addo-X, remained in use until the 1970s. Leibniz also described the binary numeral system, a central ingredient of all modern computers. However, up to the 1940s, many subsequent designs (including Charles Babbage's machines of the 1800s and even ENIAC of 1945) were based on the decimal system; ENIAC's ring counters emulated the operation of the digit wheels of a mechanical adding machine.
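The contrast between Leibniz's binary system and the decimal designs that persisted into the 1940s is easy to demonstrate: a binary digit needs only two stable states per element (on/off), where a decimal digit wheel needs ten positions.

```python
# Leibniz's binary representation versus the familiar decimal one,
# using the year the Z3 was finished as an arbitrary example value.

n = 1941
print(bin(n))                  # -> 0b11110010101 (eleven two-state elements)
print(int("11110010101", 2))   # -> 1941, converting back to decimal
```

This is why Zuse's choice of binary made his machines easier to build: a relay is naturally a two-state device, whereas emulating a ten-position digit wheel (as ENIAC's ring counters did) takes considerably more hardware per digit.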

In Japan, Ryoichi Yazu patented a mechanical calculator called the Yazu Arithmometer in 1902. It consisted of a single cylinder and 22 gears, and employed the mixed base-2 and base-5 number system familiar to users of the soroban (Japanese abacus). Carry and end of calculation were determined automatically. More than 200 units were sold, mainly to government agencies such as the Ministry of War and agricultural experiment stations. Yazu invested the profits in a factory to build what would have been Japan's first propeller-driven airplane, but that project was abandoned after his untimely death at the age of 31.

History of Computing Hardware

The history of computer hardware encompasses the hardware, its architecture, and its impact on software. The elements of computing hardware have undergone significant improvement over their history. This improvement has triggered worldwide use of the technology: performance has improved and prices have declined. Computers are accessible to ever-increasing sectors of the world's population. Computing hardware has become a platform for uses other than computation, such as automation, communication, control, entertainment, and education. Each field in turn has imposed its own requirements on the hardware, which has evolved in response to those requirements.

The von Neumann architecture unifies our current computing hardware implementations. Since digital computers rely on digital storage, and tend to be limited by the size and speed of memory, the history of computer data storage is tied to the development of computers. The major elements of computing hardware implement abstractions: input, output, memory, and processor. A processor is composed of control and datapath. In the von Neumann architecture, control of the datapath is stored in memory. This allowed control to become an automatic process; the datapath could be under software control, perhaps in response to events. Beginning with mechanical datapaths such as the abacus and astrolabe, hardware first used analogs for a computation, including water and even air as the analog quantities: analog computers have used lengths, pressures, voltages, and currents to represent the results of calculations. Eventually the voltages or currents were standardized, and then digitized. Digital computing elements have ranged from mechanical gears, to electromechanical relays, to vacuum tubes, to transistors, and to integrated circuits, all of which today implement the von Neumann architecture.