Wednesday, February 18, 2009

HARD DISK


For many years, hard disk drives were large, cumbersome devices, more suited to use in the protected environment of a data center or large office than in a harsh industrial environment (due to their delicacy), or a small office or home (due to their size and power consumption). Before the early 1980s, most hard disk drives had 8-inch (about 200 mm) or 14-inch platters, required an equipment rack or a large amount of floor space (especially the large removable-media drives, which were frequently comparable in size to washing machines), and in many cases needed high-current and/or three-phase power hookups due to the large motors they used. Because of this, hard disk drives were not commonly used with microcomputers until after 1980, when Seagate Technology introduced the ST-506, the first 5.25-inch hard disk drive, with a formatted capacity of 5 megabytes.
The capacity of hard drives has grown exponentially over time. With early personal computers, a drive with a 20 megabyte capacity was considered large. During the mid to late 1990s, when PCs were capable of storing not just text files and documents but pictures, music, and video, internal drives were made with 8 to 20 GB capacities. As of mid 2008, desktop hard disk drives typically had a capacity of 500 to 750 gigabytes, and by early 2009 the largest-capacity drives reached 2 terabytes.


A hard disk drive (HDD), commonly referred to as a hard drive, hard disk, or fixed disk drive,[1] is a non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, "drive" refers to a device distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.[2]
HDDs (introduced in 1956 as data storage for an IBM accounting computer[3]) were originally developed for use with general purpose computers. During the 1990s, the need for large-scale, reliable storage, independent of a particular device, led to the introduction of embedded systems such as RAID arrays, network attached storage (NAS) systems and storage area network (SAN) systems that provide efficient and reliable access to large volumes of data. In the 21st century, HDD usage expanded into consumer applications such as camcorders, cellphones, digital audio players, digital video players (e.g. the iPod Classic), digital video recorders, personal digital assistants and video game consoles.
HDDs record data by magnetizing ferromagnetic material directionally, to represent either a 0 or a 1 binary digit. They read the data back by detecting the magnetization of the material. A typical HDD design consists of a spindle which holds one or more flat circular disks called platters, onto which the data are recorded. The platters are made from a non-magnetic material, usually aluminum alloy or glass, and are coated with a thin layer of magnetic material. Older disks used iron(III) oxide as the magnetic material, but current disks use a cobalt-based alloy.


The platters are spun at very high speeds. Information is written to a platter as it rotates past devices called read-and-write heads that fly very close (tens of nanometers in new drives) above the magnetic surface. The read-and-write head is used to detect and modify the magnetization of the material immediately under it. There is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter. The arm is moved using a voice coil actuator or (in older designs) a stepper motor. Stepper motors were mounted outside the head-disk chamber, and preceded voice-coil drives. The latter, for a while, had a structure similar to that of a loudspeaker; the coil and heads moved in a straight line, along a radius of the platters. The present-day structure differs in several respects from that of the earlier voice-coil drives, but the same interaction between the coil and magnetic field still applies, and the term is still used.
Older drives read the data on the platter by sensing the rate of change of the magnetism in the head; these heads had small coils, and worked (in principle) much like magnetic-tape playback heads, although not in contact with the recording surface. As data density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the head changed according to the strength of the magnetism from the platter. Later development made use of spintronics; in these heads, the magnetoresistive effect was much greater than in earlier types, and was dubbed "giant" magnetoresistance (GMR). This refers to the degree of effect, not the physical size, of the head; the heads themselves are extremely tiny, too small to be seen without a microscope. GMR read heads are now commonplace.
The heads are kept from contacting the platter surface by the thin layer of air immediately next to the platter, which moves at, or close to, the platter speed. The read-and-write head is mounted on a block called a slider, and the surface next to the platter is shaped to keep it just barely out of contact; it is a type of air bearing.

The magnetic surface of each platter is conceptually divided into many small sub-micrometre-sized magnetic regions, each of which is used to encode a single binary unit of information. In today's HDDs, each of these magnetic regions is composed of a few hundred magnetic grains. Each magnetic region forms a magnetic dipole which generates a highly localized magnetic field nearby. The write head magnetizes a region by generating a strong local magnetic field. Early HDDs used an electromagnet both to generate this field and to read the data by using electromagnetic induction. Later versions of inductive heads included metal-in-gap (MIG) heads and thin film heads. In today's heads, the read and write elements are separate, but in close proximity, on the head portion of an actuator arm. The read element is typically magneto-resistive while the write element is typically thin-film inductive.
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects. To counter this, the platters are coated with two parallel magnetic layers, separated by a 3-atom-thick layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects and allow greater recording densities is perpendicular recording, first shipped in 2005; as of 2007 the technology was used in many HDDs.
In a typical opened drive, the spindle motor has an external rotor with copper-colored stator windings; the spindle bearing is in the center. Beside the spindle is the actuator, with a read-write head under the tip of its very end; a thin printed-circuit cable, visible as an orange stripe along the side of the arm, connects the read-write head to the hub of the actuator. A flexible, somewhat U-shaped ribbon cable, with one end on the hub, continues the connection from the head to the controller board on the opposite side of the drive.
The head support arm is very light, but also rigid; in modern drives, acceleration at the head reaches 250 g.
Above the actuator sits the top plate of the permanent-magnet and moving coil "motor" that swings the heads to the desired position. Beneath this plate is the moving coil, attached to the actuator hub, and beneath that is a thin neodymium-iron-boron (NIB) high-flux magnet. That magnet is mounted on the bottom plate of the "motor".
The coil itself is shaped rather like an arrowhead, and is made of doubly coated copper magnet wire. The inner coating is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The sides of the arrowhead, which point toward the actuator bearing center, interact with the magnetic field to develop a tangential force that rotates the actuator. Because current flows radially outward along one side of the arrowhead and radially inward along the other, the surface of the magnet is half N pole and half S pole, with the radial dividing line in the middle.
Using rigid disks and sealing the unit allows much tighter tolerances than in a floppy disk drive. Consequently, hard disk drives can store much more data than floppy disk drives and can access and transmit it faster.
• A typical desktop HDD might store between 120 GB and 2 TB of data (based on US market data[10]), rotate at 5,400 to 7,200 rpm and have a media transfer rate of 1 Gbit/s or higher[citation needed]. (1 GB = 10⁹ bytes; 1 Gbit/s = 10⁹ bit/s)
• As of January 2009, the highest capacity HDDs are 2 TB[11].
• The fastest “enterprise” HDDs spin at 10,000 or 15,000 rpm, and can achieve sequential media transfer speeds above 1.6 Gbit/s and a sustained transfer rate of up to 125 MB/s.[12] Drives running at 10,000 or 15,000 rpm use smaller platters to mitigate increased power requirements (due to air drag) and therefore generally have lower capacity than the highest capacity desktop drives.
• Mobile, i.e., laptop HDDs, which are physically smaller than their desktop and enterprise counterparts, tend to be slower and have lower capacity. A typical mobile HDD spins at 5,400 rpm, with 7,200 rpm models available for a slight price premium. Because of the smaller disks, mobile HDDs generally have lower capacity than the highest capacity desktop drives.
The exponential increases in disk space and data access speeds of HDDs have enabled the commercial viability of consumer products that require large storage capacities, such as digital video recorders and digital audio players.[13] In addition, the availability of vast amounts of cheap storage has made viable a variety of web-based services with extraordinary capacity requirements, such as free-of-charge web search, web archiving and video sharing (Google, Internet Archive, YouTube, etc.).
The main way to decrease access time is to increase rotational speed, thus reducing rotational delay, while the main way to increase throughput and storage capacity is to increase areal density. Based on historic trends, analysts predict a future growth in HDD bit density (and therefore capacity) of about 40% per year.[14] Access times have not kept up with throughput increases, which themselves have not kept up with growth in storage capacity.
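To make the rotational-delay component concrete, here is a minimal Python sketch (illustrative only: the average delay is simply half a revolution at the given spindle speed):

    def avg_rotational_latency_ms(rpm):
        # On average the desired sector is half a revolution away,
        # so the mean rotational delay is half the revolution time.
        return 0.5 * 60_000 / rpm  # 60,000 ms per minute

    for rpm in (5400, 7200, 10000, 15000):
        print(f"{rpm:>6} rpm: {avg_rotational_latency_ms(rpm):.2f} ms")
    # 5400 rpm: 5.56 ms, 7200 rpm: 4.17 ms, 10000 rpm: 3.00 ms, 15000 rpm: 2.00 ms

This is why moving from 7,200 to 15,000 rpm roughly halves the rotational component of access time.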
The first 3.5″ HDD marketed as able to store 1 TB was the Hitachi Deskstar 7K1000. It contains five platters of approximately 200 GB each, providing 935.5 GiB of usable space;[15] note the discrepancy between its capacity in decimal units (1 TB = 10¹² bytes) and binary units (1 TiB = 1024 GiB = 2⁴⁰ bytes). Hitachi has since been joined by Samsung (Samsung SpinPoint F1, which has 3 × 334 GB platters), Seagate and Western Digital in the 1 TB drive market.
As of December 2008, a single 3.5″ platter is able to hold 500 GB of data.[18]
Form factor              Width      Largest capacity      Max platters
5.25″ FH                 146 mm     47 GB[19] (1998)      14
5.25″ HH                 146 mm     19.3 GB[20] (1998)    4[21]
3.5″                     102 mm     2 TB[22] (2009)       4
2.5″                     69.9 mm    500 GB[23] (2008)     3
1.8″ (CE-ATA/ZIF)        54 mm      250 GB[24] (2008)     3
1.3″                     43 mm      40 GB[25] (2007)      1
1″ (CFII/ZIF/IDE-Flex)   42 mm      20 GB (2006)          1
0.85″                    24 mm      8 GB[26] (2004)       1

The capacity of a hard disk drive is usually quoted in gigabytes and terabytes. Older HDDs quoted their smaller capacities in megabytes, some of the first drives for PCs being just 5 or 10 MB.
The capacity of an HDD can be calculated by multiplying the number of cylinders by the number of heads by the number of sectors by the number of bytes per sector (most commonly 512). Drives with the ATA interface and a capacity of eight gigabytes or more behave as if they were structured into 16383 cylinders, 16 heads, and 63 sectors, for compatibility with older operating systems. Unlike in the 1980s, the cylinder, head, sector (C/H/S) counts reported to the CPU by a modern ATA drive are no longer actual physical parameters: the reported numbers are constrained by historic operating-system interfaces, and with zone bit recording the actual number of sectors per track varies by zone. Disks with a SCSI interface address each sector with a unique integer number; the operating system remains ignorant of their head or cylinder count.
The old C/H/S scheme has been replaced by logical block addressing. In some cases, to try to "force-fit" the C/H/S scheme to large-capacity drives, the number of heads was given as 64, although no modern drive has anywhere near 32 platters.
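The capacity formula above is plain multiplication; as a quick Python sketch (the 16383/16/63 geometry is the ATA compatibility figure quoted earlier):

    def chs_capacity_bytes(cylinders, heads, sectors, bytes_per_sector=512):
        # Capacity = cylinders x heads x sectors per track x bytes per sector
        return cylinders * heads * sectors * bytes_per_sector

    # The compatibility geometry reported by ATA drives of 8 GB or more:
    print(chs_capacity_bytes(16383, 16, 63))  # 8455200768 bytes, about 8.4 GB

This 8.4 GB figure is exactly the ceiling that pushed large drives to logical block addressing.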
Hard disk drive manufacturers specify disk capacity using the SI prefixes mega-, giga- and tera-, and their abbreviations M, G and T. Byte is typically abbreviated B.
Most operating-system tools report capacity using the same abbreviations but actually use binary prefixes. For instance, the prefix mega-, which normally means 10⁶ (1,000,000), in the context of data storage can mean 2²⁰ (1,048,576), which is nearly 5% more. Similar usage has been applied to prefixes of greater magnitude. This results in a discrepancy between the disk manufacturer's stated capacity and the apparent capacity of the drive when examined through most operating-system tools. The difference becomes even more noticeable for a gigabyte (7%), and again for a terabyte (9%). For a petabyte there is an 11% difference between the SI (1000⁵) and binary (1024⁵) definitions. For example, Microsoft Windows reports disk capacity both in decimal-based units to 12 or more significant digits and in binary-based units to three significant digits. Thus a disk specified by a disk manufacturer as a 30 GB disk might have its capacity reported by Windows 2000 both as "30,065,098,568 bytes" and "28.0 GB". The disk manufacturer used the SI definition of "giga", 10⁹, to arrive at 30 GB; however, because Microsoft Windows, Mac OS and some Linux distributions use "gigabyte" for 1,073,741,824 bytes (2³⁰ bytes), the operating system reports the capacity of the disk drive as (only) 28.0 GB.
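The percentages above are easy to verify. A short Python sketch of both the general discrepancy and the 30 GB example from the text:

    # How much smaller each SI (decimal) unit is than its binary counterpart
    for power, name in [(2, "mega"), (3, "giga"), (4, "tera"), (5, "peta")]:
        print(f"{name}: {1 - 1000**power / 1024**power:.1%}")
    # mega: 4.6%, giga: 6.9%, tera: 9.1%, peta: 11.2%

    advertised = 30 * 10**9    # what the manufacturer calls "30 GB"
    print(advertised / 2**30)  # ~27.9; the real drive's 30,065,098,568 bytes work out to the reported 28.0 GB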
Data transfer rate: As of 2008, a typical 7,200 rpm desktop hard drive has a sustained "disk-to-buffer" data transfer rate of about 70 megabytes per second.[32] This rate depends on the track location: it is highest for data on the outer tracks (where there are more data sectors) and lower toward the inner tracks (where there are fewer data sectors), and is generally somewhat higher for 10,000 rpm drives. A current widely used standard for the "buffer-to-computer" interface is 3.0 Gbit/s SATA, which can send about 300 MB/s from the buffer to the computer, and thus is still comfortably ahead of today's disk-to-buffer transfer rates.
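These rates also give a feel for how long whole-disk operations take. A rough Python calculation (figures taken from the paragraph above):

    drive_bytes = 10**12                   # a 1 TB drive (decimal terabyte)
    sustained = 70 * 10**6                 # ~70 MB/s sustained disk-to-buffer rate
    print(drive_bytes / sustained / 3600)  # ~3.97: reading the full disk once takes about 4 hours

which is why the sustained media rate, not the interface speed, dominates bulk transfers.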
Seek time currently ranges from just under 2 ms for high-end server drives to 15 ms for miniature drives, with the most common desktop type typically around 9 ms.[citation needed] There has not been any significant improvement in this speed for some years. Some early PC drives used a stepper motor to move the heads, and as a result had access times as slow as 80–120 ms, but this was quickly improved by voice-coil actuation in the late 1980s, reducing access times to around 20 ms.
Power consumption has become increasingly important, not just in mobile devices such as laptops but also in server and desktop markets. Increasing data center machine density has led to problems delivering sufficient power to devices, and getting rid of the waste heat subsequently produced, as well as environmental and electrical cost concerns (see green computing). Similar issues exist for large companies with thousands of desktop PCs. Smaller form factor drives often use less power than larger drives. One interesting development in this area is actively controlling the seek speed so that the head arrives at its destination only just in time to read the sector, rather than arriving as quickly as possible and then having to wait for the sector to come around (i.e. the rotational latency).
Audible noise (measured in dBA) is significant for certain applications, such as PVRs, digital audio recording and quiet computers. Low-noise disks typically use fluid bearings, slower rotational speeds (usually 5,400 rpm) and reduced seek speed under load (AAM) to minimize audible clicks and crunching sounds. Drives in smaller form factors (e.g. 2.5 inch) are often quieter than larger drives.
Shock resistance is especially important for mobile devices. Some laptops now include a motion sensor that parks the disk heads if the machine is dropped, hopefully before impact, to offer the greatest possible chance of survival in such an event.
Hard disk drives are accessed over one of a number of bus types, including parallel ATA (P-ATA, also called IDE or EIDE), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Bridge circuitry is sometimes used to connect hard disk drives to buses that they cannot communicate with natively, such as IEEE 1394, USB and SCSI.
Back in the days of the ST-506 interface, the data encoding scheme was also important. The first ST-506 disks used Modified Frequency Modulation (MFM) encoding, and transferred data at a rate of 5 megabits per second. Later on, controllers using 2,7 RLL (or just "RLL") encoding increased the transfer rate by 50%, to 7.5 megabits per second; this also increased disk capacity by 50%.
Many ST-506 interface disk drives were only specified by the manufacturer to run at the lower MFM data rate, while other models (usually more expensive versions of the same basic disk drive) were specified to run at the higher RLL data rate. In some cases, a disk drive had sufficient margin to allow the MFM specified model to run at the faster RLL data rate; however, this was often unreliable and was not recommended. (An RLL-certified disk drive could run on an MFM controller, but with 1/3 less data capacity and speed.)
Enhanced Small Disk Interface (ESDI) also supported multiple data rates (ESDI disks always used 2,7 RLL, but at 10, 15 or 20 megabits per second), but this was usually negotiated automatically by the disk drive and controller; most of the time, however, 15 or 20 megabit ESDI disk drives weren't downward compatible (i.e. a 15 or 20 megabit disk drive wouldn't run on a 10 megabit controller). ESDI disk drives typically also had jumpers to set the number of sectors per track and (in some cases) sector size.
Modern hard drives present a consistent interface to the rest of the computer, no matter what data encoding scheme is used internally. Typically a DSP in the electronics inside the hard drive takes the raw analog voltages from the read head and uses PRML and Reed–Solomon error correction [33] to decode the sector boundaries and sector data, then sends that data out the standard interface. That DSP also watches the error rate detected by error detection and correction, and performs bad sector remapping, data collection for Self-Monitoring, Analysis, and Reporting Technology, and other internal tasks.
SCSI originally had just one signaling frequency of 5 MHz for a maximum data rate of 5 megabytes/second over 8 parallel conductors, but later this was increased dramatically. The SCSI bus speed had no bearing on the disk's internal speed because of buffering between the SCSI bus and the disk drive's internal data bus; however, many early disk drives had very small buffers, and thus had to be reformatted to a different interleave (just like ST-506 disks) when used on slow computers, such as early Commodore Amiga, IBM PC compatibles and Apple Macintoshes.
Most major hard disk and motherboard vendors now support self-monitoring, analysis and reporting technology (S.M.A.R.T.), which measures drive characteristics such as temperature, spin-up time, data error rates, etc. Certain trends and sudden changes in these parameters are thought to be associated with increased likelihood of drive failure and data loss.
However, not all failures are predictable. Normal use can eventually lead to a breakdown in the inherently fragile device, which makes it essential for the user to periodically back up the data onto a separate storage device. Failure to do so will lead to the loss of data. While it may sometimes be possible to recover lost information, doing so is normally an extremely costly procedure, and success cannot be guaranteed. A 2007 study published by Google suggested very little correlation between failure rates and either high temperature or activity level; however, the correlation between manufacturer/model and failure rate was relatively strong. Statistics in this matter are kept highly secret by most entities; Google did not publish the manufacturers' names along with their respective failure rates.[41] While several S.M.A.R.T. parameters have an impact on failure probability, a large fraction of failed drives do not produce predictive S.M.A.R.T. parameters.[41] S.M.A.R.T. parameters alone may not be useful for predicting individual drive failures.[41]
A common misconception is that a colder hard drive will last longer than a hotter hard drive. The Google study seems to imply the reverse: "lower temperatures are associated with higher failure rates". Hard drives with S.M.A.R.T.-reported average temperatures below 27 °C had failure rates worse than hard drives with the highest reported average temperature of 50 °C, and failure rates at least twice as high as those in the optimum S.M.A.R.T.-reported temperature range of 36 °C to 47 °C.[41]
SCSI, SAS and FC drives are typically more expensive and are traditionally used in servers and disk arrays, whereas inexpensive ATA and SATA drives evolved in the home computer market and were perceived to be less reliable. This distinction is now becoming blurred.
The mean time between failures (MTBF) of SATA drives is usually specified as about 600,000 hours (some drives, such as the Western Digital Raptor, are rated at 1.2 million hours MTBF), while SCSI drives are rated for upwards of 1.5 million hours.[citation needed] However, independent research indicates that MTBF is not a reliable estimate of a drive's longevity.[42] MTBF testing is conducted in laboratory test chambers and is an important metric for determining the quality of a disk drive before it enters high-volume production. Once the drive product is in production, the more valid[citation needed] metric is annualized failure rate (AFR), the percentage of real-world drive failures after shipping.
SAS drives are comparable to SCSI drives, with high MTBF and high[citation needed] reliability.
Enterprise SATA drives, designed and produced for enterprise markets, have reliability comparable to other enterprise-class drives, unlike standard SATA drives.[43][44]
Typically enterprise drives (all enterprise drives, including SCSI, SAS, enterprise SATA and FC) experience between 0.70%-0.78% annual failure rates from the total installed drives.[citation needed]
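The two reliability metrics can be related under a constant-failure-rate assumption (a simplification: real drives do not fail at a constant rate, which is part of why AFR is the preferred field metric):

    import math

    HOURS_PER_YEAR = 24 * 365  # 8760

    def afr_from_mtbf(mtbf_hours):
        # Exponential failure model: probability of a failure within one year
        return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

    for mtbf in (600_000, 1_200_000, 1_500_000):
        print(f"{mtbf:>9,} h MTBF -> AFR ~ {afr_from_mtbf(mtbf):.2%}")
    # 600,000 h -> 1.45%; 1,200,000 h -> 0.73%; 1,500,000 h -> 0.58%

Note how the rated MTBF figures translate to annual failure rates in the same ballpark as the 0.70%-0.78% observed for enterprise drives.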
Eventually all mechanical hard disks fail, so the strategy for mitigating loss of data is to have redundancy in some form, such as RAID and backup. RAID should never be relied on as a backup, as RAID controllers also break down, making the disks inaccessible. Following a backup strategy, such as daily differential and weekly full backups, is the surest way to prevent data loss.

The hard drive's electronics control the movement of the actuator and the rotation of the disk, and perform reads and writes on demand from the disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles (in the case of dedicated servo technology), or segments interspersed with real data (in the case of embedded servo technology). The servo feedback optimizes the signal to noise ratio of the GMR sensors by adjusting the voice-coil of the actuated arm. The spinning of the disk also uses a servo motor. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media which have failed.
ATA disks have typically had no problems with interleave or data rate, due to their controller design, but many early models were incompatible with each other and couldn't run with two devices on the same physical cable in a master/slave setup. This was mostly remedied by the mid-1990s, when ATA's specification was standardised and the details began to be cleaned up, but still causes problems occasionally (especially with CD-ROM and DVD-ROM disks, and when mixing Ultra DMA and non-UDMA devices).
Serial ATA does away with master/slave setups entirely, placing each disk on its own channel (with its own set of I/O ports) instead.
FireWire/IEEE 1394 and USB (1.0/2.0) HDDs are external units generally containing ATA or SCSI disks, with ports on the back allowing very simple and effective expansion and mobility. Most FireWire/IEEE 1394 models are able to daisy-chain in order to continue adding peripherals without requiring additional ports on the computer itself.
Disk interface families used in personal computers
Notable families of disk interfaces include:
• Historical bit serial interfaces — connected to a hard disk drive controller with three cables, one for data, one for control and one for power. The HDD controller provided significant functions such as serial to parallel conversion, data separation and track formatting, and required matching to the drive in order to assure reliability.
o ST506 used MFM (Modified Frequency Modulation) for the data encoding method.
o ST412 was available in either MFM or RLL (Run Length Limited) variants.
o Enhanced Small Disk Interface (ESDI) was an interface developed by Maxtor to allow faster communication between the PC and the disk than MFM or RLL.
• Modern bit serial interfaces — connect to a host bus adapter (today typically integrated into the "south bridge") with two cables, one for data/control and one for power.
o Fibre Channel (FC) is a successor to the parallel SCSI interface in the enterprise market. It is a serial protocol. In disk drives, the Fibre Channel Arbitrated Loop (FC-AL) connection topology is usually used. FC has much broader usage than mere disk interfaces; it is the cornerstone of storage area networks (SANs). Recently other protocols for this field, such as iSCSI and ATA over Ethernet, have been developed as well. Confusingly, drives usually use copper twisted-pair cables for Fibre Channel, not fibre optics. The latter are traditionally reserved for larger devices, such as servers or disk array controllers.
o Serial ATA (SATA). The SATA data cable has one data pair for differential transmission of data to the device, and one pair for differential receiving from the device, just like EIA-422. That requires that data be transmitted serially. A similar differential signaling system is used in RS485, LocalTalk, USB, FireWire, and differential SCSI.
o Serial Attached SCSI (SAS). SAS is a new-generation serial communication protocol for devices designed to allow much higher speed data transfers, and it is compatible with SATA. SAS uses serial communication instead of the parallel method found in traditional SCSI devices but still uses SCSI commands.
• Word serial interfaces — connect to a host bus adapter (today typically integrated into the "south bridge") with two cables, one for data/control and one for power. The earliest versions of these interfaces typically had a 16 bit parallel data transfer to/from the drive and there are 8 and 32 bit variants. Modern versions have serial data transfer. The word nature of data transfer makes the design of a host bus adapter significantly simpler than that of the precursor HDD controller.
o Integrated Drive Electronics (IDE), later renamed to ATA, and then to P-ATA ("parallel ATA", to distinguish it from the new Serial ATA). The original name reflected the innovative integration of the HDD controller with the HDD itself, which was not found in earlier disks. Moving the HDD controller from the interface card to the disk drive helped to standardize interfaces, and to reduce cost and complexity. The 40-pin IDE/ATA connection transfers 16 bits of data at a time on the data cable. The data cable was originally 40-conductor, but later higher speed requirements for data transfer to and from the hard drive led to an "ultra DMA" mode, known as UDMA. Progressively faster versions of this standard ultimately added the requirement for an 80-conductor variant of the same cable, in which half of the conductors provide the grounding necessary for enhanced high-speed signal quality by reducing crosstalk. The connector for the 80-conductor cable has only 39 pins, the missing pin acting as a key to prevent incorrect insertion of the connector into an incompatible socket, a common cause of disk and controller damage.
o EIDE was an unofficial update (by Western Digital) to the original IDE standard, with the key improvement being the use of direct memory access (DMA) to transfer data between the disk and the computer without the involvement of the CPU, an improvement later adopted by the official ATA standards. By transferring data directly between memory and disk, DMA eliminates the need for the CPU to copy the data byte by byte, so it can process other tasks while the transfer occurs.
o Small Computer System Interface (SCSI), originally named SASI for Shugart Associates System Interface, was an early competitor of ESDI. SCSI disks were standard on servers, workstations, Commodore Amiga and Apple Macintosh computers through the mid-90s, by which time most models had transitioned to the IDE (and later, SATA) family of disks. Only in 2005 did the capacity of SCSI disks fall behind IDE disk technology, though the highest-performance disks are still available in SCSI and Fibre Channel only. SCSI's more generous cable-length limits allow for external SCSI devices. Originally SCSI data cables used single-ended transmission, but server-class SCSI could use differential transmission, either low voltage differential (LVD) or high voltage differential (HVD).
Acronym   Meaning                               Description
SASI      Shugart Associates System Interface   Historical predecessor to SCSI.
SCSI      Small Computer System Interface       Bus-oriented interface that handles concurrent operations.
SAS       Serial Attached SCSI                  Improvement of SCSI; uses serial communication instead of parallel.
ST-506    Seagate Technology                    Historical Seagate interface.
ST-412    Seagate Technology                    Historical Seagate interface (minor improvement over ST-506).
ESDI      Enhanced Small Disk Interface         Historical; backwards compatible with ST-412/506, but faster and more integrated.
ATA       Advanced Technology Attachment        Successor to ST-412/506/ESDI, integrating the disk controller completely onto the device. Incapable of concurrent operations.
SATA      Serial ATA                            Modification of ATA; uses serial communication instead of parallel.
The technological resources and know-how required for modern drive development and production mean that as of 2007, over 98% of the world's HDDs are manufactured by just a handful of large firms: Seagate (which now owns Maxtor), Western Digital, Samsung, and Hitachi (which owns the former disk manufacturing division of IBM). Fujitsu continues to make mobile- and server-class disks but exited the desktop-class market in 2001, and is reportedly selling the rest to Western Digital[7]. Toshiba is a major manufacturer of 2.5-inch and 1.8-inch notebook disks. ExcelStor is a small HDD manufacturer.
Dozens of former HDD manufacturers have gone out of business, merged, or closed their HDD divisions; as capacities and demand for products increased, profits became hard to find, and the market underwent significant consolidation in the late 1980s and late 1990s. The first notable casualty of the business in the PC era was Computer Memories Inc. or CMI; after an incident with faulty 20 MB AT disks in 1985,[45] CMI's reputation never recovered, and they exited the HDD business in 1987. Another notable failure was MiniScribe, who went bankrupt in 1990 after it was found that they had engaged in accounting fraud and inflated sales numbers for several years. Many other smaller companies (like Kalok, Microscience, LaPine, Areal, Priam and PrairieTek) also did not survive the shakeout, and had disappeared by 1993; Micropolis was able to hold on until 1997, and JTS, a relative latecomer to the scene, lasted only a few years and was gone by 1999, after attempting to manufacture HDDs in India. Their claim to fame was creating a new 3″ form factor drive for use in laptops. Quantum and Integral also invested in the 3″ form factor; but eventually ceased support as this form factor failed to catch on. Rodime was also an important manufacturer during the 1980s, but stopped making disks in the early 1990s amid the shakeout and now concentrates on technology licensing; they hold a number of patents related to 3.5-inch form factor HDDs.

MOTHERBOARD



The main circuit board of a microcomputer. The motherboard contains the connectors for attaching additional boards. Typically, the motherboard contains the CPU, BIOS, memory, mass storage interfaces, serial and parallel ports, expansion slots, and all the controllers required to control standard peripheral devices, such as the display screen, keyboard, and disk drive. Collectively, all these chips that reside on the motherboard are known as the motherboard's chipset.
Prior to the advent of the microprocessor, a computer was usually built in a card-cage case or mainframe with components connected by a backplane consisting of a set of slots themselves connected with wires; in very old designs the wires were discrete connections between card connector pins, but printed-circuit boards soon became the standard practice. The central processing unit, memory and peripherals were housed on individual printed circuit boards which plugged into the backplane.
During the late 1980s and 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard (see above). In the late 1980s, motherboards began to include single ICs (called Super I/O chips) capable of supporting a set of low-speed peripherals: keyboard, mouse, floppy disk drive, serial ports, and parallel ports. As of the late 1990s, many personal computer motherboards support a full range of audio, video, storage, and networking functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics typically retain only the graphics card as a separate component.
The early pioneers of motherboard manufacturing were Micronics, Mylex, AMI, DTK, Hauppauge, Orchid Technology, Elitegroup, DFI, and a number of Taiwan-based manufacturers.

Popular personal computers such as the Apple II and IBM PC had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Usually intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment.
The term mainboard is archaically applied to devices with a single board and no additional expansions or capability. In modern terms this would include embedded systems and controlling boards in televisions, washing machines, etc. A motherboard specifically refers to a printed circuit board whose performance and capabilities can be extended by the addition of "daughterboards".

On most PCs, it is possible to add memory chips directly to the motherboard. You may also be able to upgrade to a faster PC by replacing the CPU chip. To add additional core features, you may need to replace the motherboard entirely.
A motherboard is the central printed circuit board (PCB) in some complex electronic systems, such as modern personal computers. The motherboard is sometimes alternatively known as the mainboard, system board, or, on Apple computers, the logic board.[1] It is also sometimes casually shortened to mobo.[2]

Most computer motherboards produced today are designed for IBM-compatible computers, which currently account for around 90% of global PC sales[citation needed]. A motherboard, like a backplane, provides the electrical connections by which the other components of the system communicate, but unlike a backplane, it also hosts the central processing unit, and other subsystems and devices.
Motherboards are also used in many other electronic devices.
A typical desktop computer has its microprocessor, main memory, and other essential components on the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables, although in modern computers it is increasingly common to integrate some of these peripherals into the motherboard itself.

An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard.
Modern motherboards include, at a minimum:
• sockets (or slots) in which one or more microprocessors are installed[3]
• slots into which the system's main memory is installed (typically in the form of DIMM modules containing DRAM chips)
• a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses
• non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS
• a clock generator which produces the system clock signal to synchronize the various components
• slots for expansion cards (these interface to the system via the buses supported by the chipset)
• power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards.
Motherboards contain some non-volatile memory to initialize the system and load an operating system from some external peripheral device. Microcomputers such as the Apple II and IBM PC used read-only memory chips, mounted in sockets on the motherboard. At power-up, the central processor would load its program counter with the address of the boot ROM and start executing ROM instructions, displaying system information on the screen and running memory checks, which would in turn begin loading an operating system from an external or peripheral device (such as a disk drive). If none is available, the computer can perform tasks from other memory stores or display an error message, depending on the model and design of the computer and the version of the BIOS.
Most modern motherboard designs use a BIOS, stored in an EEPROM chip soldered to the motherboard, to bootstrap the motherboard. (Socketed BIOS chips are widely used, also.) By booting the motherboard, the memory, circuitry, and peripherals are tested and configured. This process is known as a computer Power-On Self Test (POST) and may include testing some of the following devices:
• floppy drive
• network controller
• CD-ROM drive
• DVD-ROM drive
• SCSI hard drive
• IDE, EIDE, or SATA hard drive
• External USB memory storage device
Any of the above devices can store the machine code instructions used to load an operating system or a program, as the sketch below illustrates.
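In pseudocode terms, the firmware's boot-device scan amounts to something like the following Python sketch (entirely illustrative; the device names and logic are hypothetical, and real BIOS implementations differ):

    # Hypothetical sketch of a firmware boot-device scan, not any real BIOS's logic
    class BootDevice:
        def __init__(self, name, bootable):
            self.name, self.bootable = name, bootable

    def try_boot(devices):
        for device in devices:       # devices listed in the configured boot order
            if device.bootable:      # e.g. a valid boot sector was found
                return f"booting from {device.name}"
        return "no bootable device; displaying error"

    print(try_boot([BootDevice("floppy", False), BootDevice("SATA hard drive", True)]))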
With the steadily declining costs and size of integrated circuits, it is now possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly-integrated motherboards are thus especially popular in small form factor and budget computers.
For example, the ECS RS485M-M,[6] a typical modern budget motherboard for computers based on AMD processors, has on-board support for a very large range of peripherals:
• disk controllers for a floppy disk drive, up to 2 PATA drives, and up to 6 SATA drives (including RAID 0/1 support)
• integrated ATI Radeon graphics controller supporting 2D and 3D graphics, with VGA and TV output
• integrated sound card supporting 8-channel (7.1) audio and S/PDIF output
• fast Ethernet network controller for 10/100 Mbit networking
• USB 2.0 controller supporting up to 12 USB ports
• IrDA controller for infrared data communication (e.g. with an IrDA enabled Cellular Phone or Printer)
• temperature, voltage, and fan-speed sensors that allow software to monitor the health of computer components
Expansion cards to support all of these functions would have cost hundreds of dollars even a decade ago; as of April 2007, however, such highly integrated motherboards are available for as little as $30 in the USA.
Peripheral card slots
A typical motherboard of 2007 will have a different number of connections depending on its standard. A standard ATX motherboard will typically have one PCI-E 16x connection for a graphics card, two PCI slots for various expansion cards, and one PCI-E 1x slot, a standard which will eventually supersede PCI.
A standard Super ATX motherboard will have 1x PCI-E 16x connection for a graphics card. It will also have a varying number of PCI and PCI-E 1x slots. It can sometimes also have a PCI-E 4x slot. This varies between brands and models.
Some motherboards have 2x PCI-E 16x slots, to allow more than 2 monitors without special hardware or to allow use of a special graphics technology called SLI (for Nvidia) and Crossfire (for ATI). These allow 2 graphics cards to be linked together, to allow better performance in intensive graphical computing tasks, such as gaming and video-editing.
As of 2007, virtually all motherboards come with at least four USB ports on the rear, with at least two connections on the board internally for wiring additional front ports that are built into the computer's case. Ethernet is also now included as standard: a networking port for connecting the computer to a network or a modem. A sound chip is always included on the motherboard, to allow sound to be output without the need for any extra components. This allows computers to be far more multimedia-based than before. Cheaper machines now often have their graphics chip built into the motherboard rather than on a separate card.
Motherboards are generally air-cooled, with heat sinks often mounted on larger chips such as the northbridge. If the motherboard is not cooled properly, the computer can crash. Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on their heatsinks, due to rising clock speeds and power consumption. Most motherboards have connectors for additional case fans as well. Newer motherboards have integrated temperature sensors to detect motherboard and CPU temperatures, and controllable fan connectors which the BIOS or operating system can use to regulate fan speed. Some higher-powered computers (which typically have high-performance processors and large amounts of RAM, as well as high-performance video cards) use a water-cooling system instead of many fans.
Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fan-less designs. This typically requires the use of a low-power CPU, as well as careful layout of the motherboard and other components to allow for heat sink placement.
A 2003 study[7] found that some spurious computer crashes and general reliability issues, ranging from screen image distortions to I/O read/write errors, can be attributed not to software or peripheral hardware but to aging capacitors on PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation.[8]
For more information on premature capacitor failure on PC motherboards, see capacitor plague.
Motherboards use electrolytic capacitors to filter the DC power distributed around the board. These capacitors age at a temperature-dependent rate, as their water-based electrolytes slowly evaporate. This can lead to loss of capacitance and subsequent motherboard malfunctions due to voltage instabilities. While most capacitors are rated for 2000 hours of operation at 105 °C,[9] their expected design life roughly doubles for every 10 °C below this. At 45 °C a lifetime of 15 years can be expected. This appears reasonable for a computer motherboard; however, many manufacturers have delivered substandard capacitors,[citation needed] which significantly reduce that life expectancy. Inadequate case cooling and elevated temperatures easily exacerbate this problem. It is possible, but tedious and time-consuming, to find and replace failed capacitors on PC motherboards; it is often less expensive to buy a new motherboard than to pay for such a repair.
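The doubling rule quoted above is easy to turn into a rough estimator (a sketch: the 2,000-hour/105 °C rating and the 10 °C doubling come from the paragraph, and real capacitor quality varies widely):

    def capacitor_life_hours(temp_c, rated_hours=2000, rated_temp_c=105):
        # Expected design life roughly doubles for every 10 °C below the rated temperature
        return rated_hours * 2 ** ((rated_temp_c - temp_c) / 10)

    print(capacitor_life_hours(45) / (24 * 365))  # ~14.6 years of continuous operation at 45 °C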
Motherboards are produced in a variety of sizes and shapes ("form factors"), some of which are specific to individual computer manufacturers. However, the motherboards used in IBM-compatible commodity computers have been standardized to fit various case sizes. As of 2007, most desktop computer motherboards use one of these standard form factors—even those found in Macintosh and Sun computers which have not traditionally been built from commodity components.
Laptop computers generally use highly integrated, miniaturized, and customized motherboards. This is one of the reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop component requires the replacement of the entire motherboard, which is usually more expensive than a desktop motherboard due to the large number of integrated components.
Nvidia SLI and ATI Crossfire
Nvidia SLI and ATI Crossfire technology allows 2 or more of the same series graphics cards to be linked together to allow a faster graphics experience. Almost all medium to high end Nvidia cards and most high end ATI cards support the technology.
Both require compatible motherboards. There is an obvious need for two PCI-E 16x slots to allow two cards to be inserted into the computer. The same function can be achieved on NVIDIA's 650i motherboards with a pair of x8 slots. Originally, tri-Crossfire was achieved at 8x speeds with two 16x slots and one 8x slot, albeit at a slower speed. ATI opened the technology up to Intel in 2006, and as such all new Intel chipsets support Crossfire.
SLI is a little more proprietary in its needs. It requires a motherboard with Nvidia's own NForce chipset series to allow it to run (exception: Intel X58 chipset).
It is important to note that SLI and Crossfire will not usually scale to 2x the performance of a single card when using a dual setup. They also do not double the effective amount of VRAM or memory bandwidth.

Tuesday, February 17, 2009

ETHERNET


Ethernet is the most widely installed local area network (LAN) technology. Specified in a standard, IEEE 802.3, Ethernet was originally developed by Xerox from an earlier specification called Alohanet (for the Palo Alto Research Center Aloha network) and then developed further by Xerox, DEC, and Intel. An Ethernet LAN typically uses coaxial cable or special grades of twisted pair wires. Ethernet is also used in wireless LANs. The most commonly installed Ethernet systems are called 10BASE-T and provide transmission speeds up to 10 Mbps. Devices are connected to the cable and compete for access using a Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol.
Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the Physical Layer of the OSI networking model, through means of network access at the Media Access Control (MAC) /Data Link Layer, and a common addressing format.
Ethernet was named by Robert Metcalfe, one of its developers, for the passive substance called "luminiferous (light-transmitting) ether" that was once thought to pervade the universe, carrying light throughout. Ethernet was so named to describe the way that cabling, also a passive medium, could similarly carry data everywhere throughout the network.

Ethernet Network Elements
Ethernet LANs consist of network nodes and interconnecting media. The network nodes fall into two major classes:
• Data terminal equipment (DTE)—Devices that are either the source or the destination of data frames. DTEs are typically devices such as PCs, workstations, file servers, or print servers that, as a group, are all often referred to as end stations.
• Data communication equipment (DCE)—Intermediate network devices that receive and forward frames across the network. DCEs may be either standalone devices such as repeaters, network switches, and routers, or communications interface units such as interface cards and modems.
Throughout this section, standalone intermediate network devices will be referred to as either intermediate nodes or DCEs. Network interface cards will be referred to as NICs.
The current Ethernet media options include two general types of copper cable: unshielded twisted-pair (UTP) and shielded twisted-pair (STP), plus several types of optical fiber cable.

Ethernet is standardized as IEEE 802.3. The combination of the twisted pair versions of Ethernet for connecting end systems to the network, along with the fiber optic versions for site backbones, is the most widespread wired LAN technology. It has been in use from around 1980[1] to the present, largely replacing competing LAN standards such as token ring, FDDI, and ARCNET.
Ethernet was originally developed at Xerox PARC in 1973–1975.[2] In 1975, Xerox filed a patent application listing Robert Metcalfe and David Boggs, plus Chuck Thacker and Butler Lampson, as inventors (U.S. Patent 4,063,220 : Multipoint data communication system with collision detection). In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper.[3]
The experimental Ethernet described in that paper ran at 3 Mbit/s, and had 8-bit destination and source address fields, so Ethernet addresses were not the global addresses they are today. By software convention, the 16 bits after the destination and source address fields were a packet type field, but, as the paper says, "different protocols use disjoint sets of packet types", so those were packet types within a given protocol, rather than the packet type in current Ethernet which specifies the protocol being used.
Metcalfe left Xerox in 1979 to promote the use of personal computers and local area networks (LANs), forming 3Com. He convinced DEC, Intel, and Xerox to work together to promote Ethernet as a standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it standardized the 10 megabits/second Ethernet, with 48-bit destination and source addresses and a global 16-bit type field. The standard was first published on September 30, 1980. It competed with two largely proprietary systems, token ring and ARCNET, but those soon found themselves buried under a tidal wave of Ethernet products. In the process, 3Com became a major company.
Twisted-pair Ethernet systems have been developed since the mid-80s, beginning with StarLAN, but becoming widely known with 10BASE-T. These systems replaced the coaxial cable on which early Ethernets were deployed with a system of hubs linked with unshielded twisted pair (UTP), ultimately replacing the CSMA/CD scheme in favor of a switched full duplex system offering higher performance.
Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are fundamental differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than a radio broadcast. The common cable providing the communication channel was likened to the ether and it was from this reference that the name "Ethernet" was derived.
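The collision-recovery half of CSMA/CD is truncated binary exponential backoff: after the nth successive collision, a station waits a random number of slot times drawn from [0, 2^min(n,10) - 1], and gives up after 16 attempts. A minimal Python sketch of that rule:

    import random

    def backoff_slot_times(collision_count):
        # Truncated binary exponential backoff as used by IEEE 802.3:
        # the exponent grows with each collision but is capped at 10,
        # and the station abandons the frame after 16 failed attempts.
        if collision_count > 16:
            raise RuntimeError("excessive collisions; frame dropped")
        exponent = min(collision_count, 10)
        return random.randrange(2 ** exponent)

    # After the 3rd collision the wait is 0-7 slot times
    # (one slot time is 512 bit times on 10 and 100 Mbit/s Ethernet).
    print(backoff_slot_times(3))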
From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today underlies most LANs. The coaxial cable was replaced with point-to-point links connected by Ethernet hubs and/or switches to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network. The advent of twisted-pair wiring dramatically lowered installation costs relative to competing technologies, including the older Ethernet technologies.
Above the physical layer, Ethernet stations communicate by sending each other data packets, blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used both to specify the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.
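The globally-unique versus locally-administered distinction lives in two flag bits of the first octet of every MAC address, which a few lines of Python can inspect (the example address is arbitrary):

    def describe_mac(mac):
        first_octet = int(mac.split(":")[0], 16)
        return {
            "multicast": bool(first_octet & 0x01),             # I/G bit
            "locally_administered": bool(first_octet & 0x02),  # U/L bit
        }

    print(describe_mac("00:16:3e:00:00:01"))
    # {'multicast': False, 'locally_administered': False}: a globally unique unicast address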
Despite the significant changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all generations of Ethernet (excluding early experimental versions) share the same frame formats (and hence the same interface for higher layers), and can be readily interconnected.
Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, obviating the need for installation of a separate network card.
IEEE 802.3 specifies a series of standards for telecommunication technology over Ethernet local-area networks. The following chart details the different Ethernet flavors and how they differ from one another.
Designation Description
10Base-2 10 Mbps baseband Ethernet over coaxial cable with a maximum distance of 185 meters. Also referred to as Thin Ethernet or Thinnet or Thinwire.
10Base-5 10 Mbps baseband Ethernet over coaxial cable with a maximum distance of 500 meters. Also referred to as Thick Ethernet or Thicknet or Thickwire.
10Base-36 10 Mbps baseband Ethernet over multi-channel coaxial cable with a maximum distance of 3,600 meters.
10Base-F 10 Mbps baseband Ethernet over optical fiber.

10Base-FB 10 Mbps baseband Ethernet over two multi-mode optical fibers using a synchronous active hub.

10Base-FL 10 Mbps baseband Ethernet over two optical fibers and can include an optional asynchronous hub.
10Base-FP 10 Mbps baseband Ethernet over two optical fibers using a passive hub to connect communication devices.
10Base-T 10 Mbps baseband Ethernet over twisted pair cables with a maximum length of 100 meters.
10Broad-36 10 Mbps broadband Ethernet over three channels of a cable television system with a maximum cable length of 3,600 meters.
10Gigabit Ethernet Ethernet at 10 billion bits per second over optical fiber. Multimode fiber supports distances up to 300 meters; single mode fiber supports distances up to 40 kilometers.
100Base-FX 100 Mbps baseband Ethernet over two multimode optical fibers.
100Base-T 100 Mbps baseband Ethernet over twisted pair cable.
100Base-T2 100 Mbps baseband Ethernet over two pairs of Category 3 or higher unshielded twisted pair cable.

100Base-T4 100 Mbps baseband Ethernet over four pairs of Category 3 or higher unshielded twisted pair cable.
100Base-TX 100 Mbps baseband Ethernet over two pairs of Category 5 unshielded twisted pair or shielded twisted pair cable.
100Base-X A generic name for 100 Mbps Ethernet systems.
1000Base-CX 1000 Mbps baseband Ethernet over two pairs of 150-ohm shielded twisted pair cable.
1000Base-LX 1000 Mbps baseband Ethernet over two multimode or single-mode optical fibers using longwave laser optics.
1000Base-SX 1000 Mbps baseband Ethernet over two multimode optical fibers using shortwave laser optics.
1000Base-T 1000 Mbps baseband Ethernet over four pairs of Category 5 unshielded twisted pair cable.
1000Base-X A generic name for 1000 Mbps Ethernet systems.

The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable (with BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5 and 10BASE-T used twisted pair connected to Ethernet hubs with 8P8C modular connectors (not to be confused with FCC's RJ45).
Ethernet currently has many varieties that vary in both speed and physical medium used. Perhaps the most common forms are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three utilize twisted pair cables and 8P8C modular connectors (often called RJ45). They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. However, each version has become steadily more selective about the cable it runs on, and some installers have avoided 1000BASE-T for everything except short connections to servers.
Fiber optic variants of Ethernet are commonly used in structured cabling applications. These variants have also seen substantial penetration in enterprise datacenter applications, but are rarely seen connected to end user systems for cost and convenience reasons. Their advantages lie in performance, electrical isolation, and distance, up to tens of kilometers with some versions. Fiber versions of each new, higher speed almost invariably come out before copper versions. 10 gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with development starting on 40 Gbit/s[6][7] and 100 Gbit/s Ethernet. Metcalfe now believes commercial applications using terabit Ethernet may occur by 2015, though he says existing Ethernet standards may have to be overthrown to reach terabit speeds.[8]
A data packet on the wire is called a frame. A frame viewed on the actual physical wire would show Preamble and Start Frame Delimiter, in addition to the other data. These are required by all physical hardware. They are not displayed by packet sniffing software because these bits are removed by the Ethernet adapter before being passed on to the host (in contrast, it is often the device driver which removes the CRC32 (FCS) from the packets seen by the user).
The table below shows the complete Ethernet frame, as transmitted. Note that the bit patterns in the preamble and start of frame delimiter are written as bit strings, with the first bit transmitted on the left (not as byte values, which in Ethernet are transmitted least significant bit first). This notation matches the one used in the IEEE 802.3 standard.
802.3 MAC frame, as transmitted:
• Preamble: 7 octets of 10101010
• Start-of-frame delimiter: 1 octet of 10101011
• MAC destination: 6 octets
• MAC source: 6 octets
• Ethertype/length: 2 octets
• Payload (data and padding): 46-1500 octets
• CRC32 (FCS): 4 octets
• Interframe gap: 12 octets
The frame proper (destination address through CRC32) is 64-1518 octets; including the preamble and start-of-frame delimiter, 72-1526 octets are transmitted.
After a frame has been sent, transmitters are required to transmit 12 octets of idle characters before transmitting the next frame. At 10 Mbit/s this gap takes 9600 ns, at 100 Mbit/s 960 ns, and at 1000 Mbit/s 96 ns.
10/100 Mbit/s transceiver chips (MII PHY) work with four bits (a nibble) at a time. Therefore the preamble appears as 7 instances of 0101 + 0101, and the start frame delimiter as 0101 + 1101; 8-bit values are sent low nibble first, then high nibble. 1000 Mbit/s transceiver chips (GMII) work with 8 bits at a time, and the 10 Gbit/s (XGMII) PHY works with 32 bits at a time.
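To make the frame layout above concrete, here is a minimal Python sketch that assembles an Ethernet II frame, pads the payload to the 46-octet minimum, appends the CRC32, and computes the interframe gap duration at each common speed. The addresses and EtherType are invented for the illustration, and the preamble and start-of-frame delimiter are omitted, since the adapter hardware adds those.

import struct
import zlib

def build_ethernet_frame(dst_mac, src_mac, ethertype, payload):
    """Assemble destination, source, EtherType, padded payload, and CRC32."""
    if len(payload) < 46:                       # pad to the 46-octet minimum
        payload = payload + b"\x00" * (46 - len(payload))
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = zlib.crc32(header + payload)          # frame check sequence
    return header + payload + struct.pack("<I", fcs)  # FCS sent low byte first

# Hypothetical addresses and the IPv4 EtherType (0x0800), for illustration.
frame = build_ethernet_frame(bytes.fromhex("ffffffffffff"),
                             bytes.fromhex("001122334455"),
                             0x0800, b"hello")
print(len(frame))           # 64 octets: the minimum frame, destination to CRC

# The 12-octet (96-bit) interframe gap at each common speed:
for rate_mbps in (10, 100, 1000):
    print(rate_mbps, "Mbit/s:", int(96 / rate_mbps * 1000), "ns")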
There are several types of Ethernet frames:
• The Ethernet Version 2 or Ethernet II frame, the so-called DIX frame (named after DEC, Intel, and Xerox); this is the most common today, as it is often used directly by the Internet Protocol.
• Novell's non-standard variation of IEEE 802.3 ("raw 802.3 frame") without an IEEE 802.2 LLC header.
• IEEE 802.2 LLC frame
• IEEE 802.2 LLC/SNAP frame
In addition, all four Ethernet frame types may optionally contain an IEEE 802.1Q tag identifying the VLAN a frame belongs to and its IEEE 802.1p priority (quality of service). This encapsulation is defined in the IEEE 802.3ac specification and increases the maximum frame size by 4 bytes to 1522 bytes.
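As a rough sketch of how a receiver distinguishes these encapsulations: the two octets after the source address are an 802.3 length field if 1500 or less, the 802.1Q tag protocol identifier if equal to 0x8100 (in which case the real EtherType follows the tag), and an Ethernet II EtherType if 0x0600 or above. The function below is an illustrative simplification, not a full parser.

import struct

def classify_frame(frame):
    """Inspect the EtherType/length field, allowing one optional 802.1Q tag."""
    (tl,) = struct.unpack_from("!H", frame, 12)     # after the two addresses
    info = {"vlan": None}
    if tl == 0x8100:                                # 802.1Q tag present
        (tci,) = struct.unpack_from("!H", frame, 14)
        info["vlan"] = tci & 0x0FFF                 # 12-bit VLAN ID
        info["priority"] = tci >> 13                # 3-bit 802.1p priority
        (tl,) = struct.unpack_from("!H", frame, 16)
    info["kind"] = "EtherType" if tl >= 0x0600 else "802.3 length"
    info["value"] = tl
    return info

print(classify_frame(b"\x00" * 12 + b"\x08\x00" + b"\x00" * 50))
# {'vlan': None, 'kind': 'EtherType', 'value': 2048}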
Varieties of Ethernet
Some early varieties
• 10BASE5: the original standard uses a single coaxial cable into which you literally tap a connection by drilling into the cable to connect to the core and screen. Largely obsolete, though due to its widespread deployment in the early days, some systems may still be in use.
• 10BROAD36: Obsolete. An early standard supporting Ethernet over longer distances. It utilized broadband modulation techniques, similar to those employed in cable modem systems, and operated over coaxial cable.
• 1BASE5: An early attempt to standardize a low-cost LAN solution, it operated at 1 Mbit/s and was a commercial failure.
Ethernet Basics
Ethernet is a local area technology, with networks traditionally operating within a single building, connecting devices in close proximity. At most, Ethernet devices could have only a few hundred meters of cable between them, making it impractical to connect geographically dispersed locations. Modern advancements have increased these distances considerably, allowing Ethernet networks to span tens of kilometers.
Protocols
In networking, the term protocol refers to a set of rules that govern communications. Protocols are to computers what language is to humans. Since this article is in English, to understand it you must be able to read English. Similarly, for two devices on a network to successfully communicate, they must both understand the same protocols.
Ethernet Terminology
Ethernet follows a simple set of rules that govern its basic operation. To better understand these rules, it is important to understand the basics of Ethernet terminology.
• Medium - Ethernet devices attach to a common medium that provides a path along which the electronic signals will travel. Historically, this medium has been coaxial copper cable, but today it is more commonly a twisted pair or fiber optic cabling.
• Segment - We refer to a single shared medium as an Ethernet segment.
• Node - Devices that attach to that segment are stations or nodes.
• Frame - The nodes communicate in short messages called frames, which are variably sized chunks of information.
Frames are analogous to sentences in human language. In English, we have rules for constructing our sentences: We know that each sentence must contain a subject and a predicate. The Ethernet protocol specifies a set of rules for constructing frames. There are explicit minimum and maximum lengths for frames, and a set of required pieces of information that must appear in the frame. Each frame must include, for example, both a destination address and a source address, which identify the recipient and the sender of the message. The address uniquely identifies the node, just as a name identifies a particular person. No two Ethernet devices should ever have the same address.

LAN CARD


A Local Area Network (LAN) card is used to provide wireless Internet access to computer users in home or roaming networks. It works by exchanging signals with a router, which transmits the signals over a physically wired line. The LAN card became ubiquitous in Western society in the early part of the twenty-first century, when the cards became affordable and wireless networks sprang up everywhere, from coffee shops to airports.
Most home Internet users use a LAN card for wireless Internet access so that multiple residents can be on the Internet at the same time. The router is placed in a central location in the home to provide even signal across the household. Wireless networks are also widespread on college campuses, so that students with laptops can use the Internet wherever they may be. The value of wireless to attract customers has been recognized by restaurants and other such businesses, who usually provide network access in exchange for a small fee or purchase of their product.
A LAN card communicates with the router using radio waves and an antenna. The computer converts data into binary form and sends it to the LAN card, which in turn broadcasts the signal to be picked up by the router. The router sends the information on in the form of packets, and bundles information for return to the computer via the LAN card in the same way. Wireless networks usually transmit at a relatively high frequency, between 2.4 and 5 GHz, to accommodate the rapid transfer of large amounts of data. When purchasing a router or LAN card, the packaging will indicate the frequency of the signal it uses.
A network card, network adapter, network interface controller (NIC), network interface card, or LAN adapter is a computer hardware component designed to allow computers to communicate over a computer network. It is both an OSI layer 1 (physical layer) and layer 2 (data link layer) device, as it provides physical access to a networking medium and provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.

Although other network technologies exist, Ethernet has achieved near-ubiquity since the mid-1990s. Every Ethernet network card has a unique 48-bit serial number called a MAC address, which is stored in ROM carried on the card. Every computer on an Ethernet network must have a card with a unique MAC address. Normally it is safe to assume that no two network cards will share the same address, because card vendors purchase blocks of addresses from the Institute of Electrical and Electronics Engineers (IEEE) and assign a unique address to each card at the time of manufacture.
Whereas network cards used to be expansion cards that plugged into a computer bus, the low cost and ubiquity of the Ethernet standard mean that most newer computers have a network interface built into the motherboard. These either have Ethernet capabilities integrated into the motherboard chipset or implemented via a low-cost dedicated Ethernet chip connected through the PCI bus (or the newer PCI Express bus). A separate network card is not required unless multiple interfaces are needed or some other type of network is used. Newer motherboards may even have dual network (Ethernet) interfaces built in.
The card implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or token ring. This provides a base for a full network protocol stack, allowing communication among small groups of computers on the same LAN and large-scale network communications through routable protocols, such as IP.
There are four techniques used to transfer data; a NIC may use one or more of them (see the sketch after this list).
• Polling is where the microprocessor examines the status of the peripheral under program control.
• Programmed I/O is where the microprocessor alerts the designated peripheral by applying its address to the system's address bus.
• Interrupt-driven I/O is where the peripheral alerts the microprocessor that it's ready to transfer data.
• DMA is where the intelligent peripheral assumes control of the system bus to access memory directly. This removes load from the CPU but requires a separate processor on the card.
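The contrast between the first and third techniques can be sketched conceptually in Python; in reality these mechanisms live in hardware and drivers, and the Peripheral class below is invented purely for illustration.

import queue
import threading
import time

class Peripheral:
    """A toy device that 'receives' a packet after a short delay."""
    def __init__(self):
        self.ready = False
        self.data = None
        self.on_ready = None          # interrupt handler, if one is registered

    def start_receive(self):
        def arrive():
            time.sleep(0.1)
            self.data, self.ready = b"packet", True
            if self.on_ready:         # interrupt-driven: notify the handler
                self.on_ready(self.data)
        threading.Thread(target=arrive).start()

# Polling: the processor repeatedly examines the device's status.
dev = Peripheral()
dev.start_receive()
while not dev.ready:
    pass                              # busy-waiting burns CPU time
print("polled:", dev.data)

# Interrupt-driven: the device signals the processor when it is ready.
inbox = queue.Queue()
dev2 = Peripheral()
dev2.on_ready = inbox.put             # handler runs when data arrives
dev2.start_receive()
print("interrupt:", inbox.get())      # the CPU is free until notified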
A network card typically has a twisted pair, BNC, or AUI socket where the network cable is connected, and a few LEDs to inform the user whether the network is active and whether data is being transmitted on it. Network cards are typically available in 10/100/1000 Mbit/s varieties, meaning they can support transfer rates of 10, 100, or 1000 megabits per second.

Multiple users can maintain a connection to the router on different bands, to avoid interference, and are assigned unique identities by the router in the form of an IP address. Each user attempting to access the network will need a LAN card, which is either built into the computer or available as an external attachment connected through a Universal Serial Bus port or the PC card slot of a laptop.
Networks have a variety of security settings, with some being accessible to all users and others requiring a password to access the router. Even when a network is secured, a LAN card will be able to see it and list it as an available network, but when asked to connect it will prompt the user for a password. It is recommended that wireless networks be secured to prevent the exploitation of vulnerabilities, and that users never connect to an unknown network.
A network interface controller (NIC) is a hardware interface that gives a network-capable device access to a computer network such as the Internet. The NIC has a ROM chip with a unique Media Access Control (MAC) address burned into it; this address identifies both the vendor and the individual device on the LAN. The NIC exists on both the physical layer (Layer 1) and the data link layer (Layer 2) of the OSI model.
Sometimes the words 'controller' and 'card' are used interchangeably when talking about networking, because the most common NIC is the network interface card. Although 'card' is more commonly used, it is less encompassing: the 'controller' may take the form of a network card installed inside a computer, or it may be an embedded component of a computer motherboard, a router, an expansion card, a printer interface, or a USB device.
A MAC address is a 48-bit network hardware identifier that is burned into a ROM chip on the NIC to identify that device on the network. The first 24 bits are called the Organizationally Unique Identifier (OUI) and identify the manufacturer. Each OUI allows for 16,777,216 unique NIC addresses.
Smaller manufacturers that do not need over 4,096 unique NIC addresses may opt to purchase an Individual Address Block (IAB) instead. An IAB consists of the 24-bit OUI plus a 12-bit extension (taken from the 'potential' NIC portion of the MAC address), leaving 12 bits to identify individual devices.
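A short sketch of how a 48-bit MAC address splits into these fields; the address used is a made-up example. The two low-order bits of the first octet carry the individual/group and universal/local flags, the latter marking the locally administered addresses mentioned earlier.

def parse_mac(mac):
    """Split a MAC address into OUI and device-specific parts, with flag bits."""
    octets = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    assert len(octets) == 6, "a MAC address is 48 bits (6 octets)"
    return {
        "oui": octets[:3].hex(":"),             # first 24 bits: manufacturer
        "device": octets[3:].hex(":"),          # last 24 bits: per-device
        "multicast": bool(octets[0] & 0x01),    # I/G bit: group address?
        "locally_administered": bool(octets[0] & 0x02),  # U/L bit
    }

print(parse_mac("00:11:22:33:44:55"))           # hypothetical address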
A wireless network interface controller (WNIC) is a network card which connects to a radio-based computer network, unlike a regular network interface controller (NIC) which connects to a wire-based network such as Token Ring or Ethernet. A WNIC, just like a NIC, works at Layers 1 and 2 of the OSI model. A WNIC is an essential component of a wireless desktop computer. The card uses an antenna to communicate via microwave-band radio. A WNIC in a desktop computer is usually connected through the PCI bus; other connectivity options are USB and PC Card. Integrated WNICs are also available (typically in Mini PCI / PCI Express Mini Card form).
A WNIC can operate in two modes known as infrastructure mode and ad hoc mode.
In an infrastructure mode network the WNIC needs an access point: all data is transferred using the access point as the central hub. All wireless nodes in an infrastructure mode network connect to an access point. All nodes connecting to the access point must have the same service set identifier (SSID) as the access point, and if the access point is enabled with WEP they must have the same WEP key or other authentication parameters.
In an ad hoc mode network the WNIC does not require an access point, but can interface directly with all other wireless nodes. All the nodes in an ad hoc network must use the same channel and SSID.
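A toy sketch of the association rules the two paragraphs above describe; the node and access point parameters are invented for the illustration.

def can_associate(node, other, infrastructure=True):
    """Nodes must share the SSID; plus the WEP key (infrastructure mode with
    WEP enabled) or the channel (ad hoc mode)."""
    if node["ssid"] != other["ssid"]:
        return False
    if infrastructure:                   # 'other' is the access point here
        if other.get("wep"):
            return node.get("wep_key") == other.get("wep_key")
        return True
    return node["channel"] == other["channel"]   # ad hoc: same channel too

ap = {"ssid": "office", "wep": True, "wep_key": "k1"}
laptop = {"ssid": "office", "wep_key": "k1", "channel": 6}
print(can_associate(laptop, ap))         # True: SSID and WEP key both match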
WNICs are designed around the IEEE 802.11 standard which sets out low-level specifications for how all wireless networks operate. Earlier interface controllers are usually only compatible with earlier variants of the standard, while newer cards support both current and old standards.

Specifications commonly used in marketing materials for WNICs include:
• Wireless data transfer rates (measured in Mbit/s); these range from 2 Mbit/s to 54 Mbit/s.[1]
• Wireless transmit power (measured in dBm)
• Wireless network standards (may include standards such as 802.11b, 802.11g, 802.11n, etc.). 802.11g offers data transfer speeds equivalent to 802.11a (up to 54 Mbit/s) and the wider 300-foot (91 m) range of 802.11b, and is backward compatible with 802.11b.
Wireless local area network standards
802.11 protocol summary (release date; frequency; typical throughput; maximum net bitrate; modulation; approximate indoor/outdoor range):[2]
• Legacy 802.11: Jun 1997; 2.4 GHz; 0.9 Mbit/s typical; 2 Mbit/s max; IR/FHSS/DSSS; ~20 m indoor, ~100 m outdoor
• 802.11a: Sep 1999; 5 GHz; 23 Mbit/s typical; 54 Mbit/s max; OFDM; ~35 m indoor, ~120 m outdoor
• 802.11b: Sep 1999; 2.4 GHz; 4.3 Mbit/s typical; 11 Mbit/s max; DSSS; ~38 m indoor, ~140 m outdoor
• 802.11g: Jun 2003; 2.4 GHz; 19 Mbit/s typical; 54 Mbit/s max; OFDM; ~38 m indoor, ~140 m outdoor
• 802.11n: ~Jun 2010 (expected); 2.4/5 GHz; 74 Mbit/s typical; 300 Mbit/s max; OFDM; ~70 m indoor, ~250 m outdoor[3]
• 802.11y: Nov 2008; 3.7 GHz; 23 Mbit/s typical; 54 Mbit/s max; OFDM; ~50 m indoor, ~5,000 m outdoor

LOCAL AREA NETWORK


A local area network (LAN) is a computer network covering a small physical area, like a home, office, or small group of buildings, such as a school, or an airport. The defining characteristics of LANs, in contrast to wide-area networks (WANs), include their usually higher data-transfer rates, smaller geographic range, and lack of a need for leased telecommunication lines.
Ethernet over unshielded twisted pair cabling, and Wi-Fi are the two most common technologies currently, but ARCNET, Token Ring and many others have been used in the past.
LANs are capable of transmitting data at very fast rates, much faster than data can be transmitted over a telephone line; but the distances are limited, and there is also a limit on the number of computers that can be attached to a single LAN. A local area network can also be described as a computer network dedicated to sharing data among several single-user workstations or personal computers, each of which is called a node. A LAN can have from two to several hundred such nodes, each separated by distances of several feet to as much as a mile, and should be distinguished from connections among computers over public carriers, such as telephone circuits, that are used for other purposes. Because of the relatively small areas involved, the nodes in a LAN can be connected by special high-data-rate cables.


Local area networks (LANs) are computer networks ranging in size from a few computers in a single office to hundreds or even thousands of devices spread across several buildings. They function to link computers together and provide shared access to printers, file servers, and other services. LANs in turn may be plugged into larger networks, such as larger LANs or wide area networks (WANs), connecting many computers within an organization to each other and/or to the Internet.
Because the technologies used to build LANs are extremely diverse, it is impossible to describe them except in the most general way. Universal components consist of the physical media that connect devices, interfaces on the individual devices that connect to the media, protocols that transmit data across the network, and software that negotiates, interprets, and administers the network and its services. Many LANs also include signal repeaters and bridges or routers, especially if they are large or connect to other networks.
The level of management required to run a LAN depends on the type, configuration, and number of devices involved, but in some cases it can be considerable.
The local area network (LAN) is by far the most common type of data network. As the name suggests, a LAN serves a local area (typically the area of a floor of a building, but in some cases spanning a distance of several kilometers). Typical installations are in industrial plants, office buildings, college or university campuses, or similar locations. In these locations, it is feasible for the owning organization to install high-quality, high-speed communication links interconnecting nodes. Typical data transmission speeds are one to 100 megabits per second.
A wide variety of LANs have been built and installed, but a few types have more recently become dominant. The most widely used LAN system is the Ethernet system developed by the Xerox Corporation.
In summary, a LAN is a communications network which is:
• local (i.e. one building or group of buildings)
• controlled by one administrative authority
• based on the assumption that other users of the LAN are trusted
• usually high speed, and always shared
A LAN messenger is an instant messaging program designed for use within a single local area network (LAN).

There are advantages to using a LAN messenger over a normal instant messenger. A LAN messenger runs inside a company or private LAN, so only people inside the firewall have access to the system. Communication data does not leave the LAN, and the system cannot be spammed from the outside.
Types of LAN Cable - Computer Networking Cables
Cable Media
Cable media conduct either electricity or light. The common cable types are: Twisted pair cable, Coaxial cable and Fiber optic cable.
Twisted Pair TP Cable
Twisting the copper wires reduces crosstalk and signal emissions. Two insulated copper wires are twisted at constant intervals across the length to form a twisted pair. A single TP cable has multiple twisted pairs housed inside a common jacket. There are two types of TP:
• Unshielded Twisted Pair (UTP)
• Shielded Twisted Pair (STP)
Unshielded Twisted Pair TP Cable
The UTP cable has a set of twisted pairs with a simple plastic encasement. It is cheaper and easier to install, but it does not support very high speeds (beyond about 100 Mbps), has high attenuation, and is easily affected by EMI. There are five standard categories of UTP cable.
Shielded Twisted Pair TP Cable
STP cable is an insulated cable with the pairs of wires wrapped in a foil or mesh shielding. It is more immune to EMI than UTP cable and supports higher speed. But it is costlier than UTP cable.
Coaxial cable
Coaxial cable has two conductors, one inside the other. The inner conductor is either a solid copper wire or strands of copper. It is covered by an insulating plastic foam. The foam is surrounded by the outer conductor, which is usually either a wire mesh tube or a conductive foil wrap; this acts as an EMI shield. An insulating sheath of PVC or Teflon covers the entire cable. Coaxial cable supports very high speeds and is highly immune to EMI, but it is expensive.
Coaxial Cable Types:
• RG-8 (10Base5): Thick Ethernet; 50 ohm impedance
• RG-58 (10Base2): Thin Ethernet; 50 ohm impedance
• RG-59: Cable TV; 75 ohm impedance
• RG-62: ARCnet; 93 ohm impedance
Fiber Optic Cable
Fiber optic cable transmits light pulses over a glass or plastic conductor. It consists of a glass or plastic core which carries light pulses. It is surrounded by layers of reflective glass called cladding. A plastic spacer layer covers the cladding. It is in turn surrounded by a protective layer of fibers. The entire cable is protected by a tough outer sheath. The center core provides the light path, or waveguide, and the glass cladding is designed to refract light back into the core.
Fiber optic cable is lighter than UTP and STP cables. It gives extremely low attenuation rates and supports data rates up to 2 Gbps. It is immune to EMI. But it is costly and complex to manufacture and install.
RJ-45 LAN Cable
RJ-45 terminated cables are used for both peer-to-peer and server-based systems; their pin assignments differ from those of other cables. They support fast, secure data transfer, and connections are typically distributed through switches and hubs.
A network topology is the shape of a local-area network (LAN) or other communications system. Topologies are either physical or logical.
There are four principal topologies used in LANs.
• bus topology: All devices are connected to a central cable, called the bus or backbone. Bus networks are relatively inexpensive and easy to install for small networks. Ethernet systems use a bus topology.
• ring topology : All devices are connected to one another in the shape of a closed loop, so that each device is connected directly to two other devices, one on either side of it. Ring topologies are relatively expensive and difficult to install, but they offer high bandwidth and can span large distances.
• star topology: All devices are connected to a central hub. Star networks are relatively easy to install and manage, but bottlenecks can occur because all data must pass through the hub.
• tree topology: A tree topology combines characteristics of linear bus and star topologies. It consists of groups of star-configured workstations connected to a linear bus backbone cable.
These topologies can also be mixed. For example, a bus-star network consists of a high-bandwidth bus, called the backbone, which connects a collection of slower-bandwidth star segments. The sketch below illustrates the basic structures.
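To make the structural differences concrete, here is a small Python sketch that models bus and star topologies as adjacency lists; the device names are invented for the illustration.

def star(hub, nodes):
    """Star: every device connects only to the central hub."""
    graph = {hub: list(nodes)}
    for n in nodes:
        graph[n] = [hub]            # all traffic must pass through the hub
    return graph

def bus(nodes):
    """Bus: every device attaches to one shared backbone segment."""
    graph = {"backbone": list(nodes)}
    for n in nodes:
        graph[n] = ["backbone"]     # the segment is shared by all nodes
    return graph

print(star("hub", ["pc1", "pc2", "pc3"]))
print(bus(["pc1", "pc2", "pc3"]))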

Wednesday, February 4, 2009

CPU (introduction and socket)


A central processing unit (CPU) is an electronic circuit that can execute computer programs. This broad definition can easily be applied to many early computers that existed long before the term "CPU" ever came into widespread usage. The term itself and its initialism have been in use in the computer industry at least since the early 1960s (Weik 1961). The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation has remained much the same.
Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are suited for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones to children's toys.
Prior to the advent of machines that resemble today's CPUs, computers such as the ENIAC had to be physically rewired in order to perform different tasks. These machines are often referred to as "fixed-program computers," since they had to be physically reconfigured in order to run a different program. Since the term "CPU" is generally defined as a software (computer program) execution device, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.
The idea of a stored-program computer was already present during ENIAC's design, but was initially omitted so the machine could be finished sooner. On June 30, 1945, before ENIAC was even completed, mathematician John von Neumann distributed the paper entitled "First Draft of a Report on the EDVAC." It outlined the design of a stored-program computer that would eventually be completed in August 1949 (von Neumann 1945). EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the large amount of time and effort it took to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the computer's memory. [1]
While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him such as Konrad Zuse had suggested similar ideas. Additionally, the so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.
Being digital devices, all CPUs deal with discrete states and therefore require some kind of switching elements to differentiate between and change these states. Prior to commercial acceptance of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational and eventually stop functioning altogether.[2] Usually, when a tube failed, the CPU would have to be diagnosed to locate the failing component so it could be replaced. Therefore, early electronic (vacuum tube based) computers were generally faster but less reliable than electromechanical (relay based) computers.


Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely (Weik 1961:238). In the end, tube based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rate). Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.
The introduction of the microprocessor in the 1970s significantly affected the design and implementation of CPUs. Since the introduction of the first microprocessor (the Intel 4004) in 1970 and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual vast success of the now ubiquitous personal computer, the term "CPU" is now applied almost exclusively to microprocessors.
Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size as a result of being implemented on a single die means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU has increased dramatically. This widely observed trend is described by Moore's law, which has proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity to date.
While the complexity, size, construction, and general form of CPUs have changed drastically over the past sixty years, it is notable that the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As the aforementioned Moore's law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.
The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback.
The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The location in program memory is determined by a program counter (PC), which stores a number that identifies the current position in the program. In other words, the program counter keeps track of the CPU's place in the current program. After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units.[3] Often the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below).
The instruction that the CPU fetches from memory is used to determine what the CPU is to do. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA).[4] Often, one group of numbers in the instruction, called the opcode, indicates which operation to perform. The remaining parts of the number usually provide information required for that instruction, such as operands for an addition operation. Such operands may be given as a constant value (called an immediate value), or as a place to locate a value: a register or a memory address, as determined by some addressing mode. In older designs the portions of the CPU responsible for instruction decoding were unchangeable hardware devices. However, in more abstract and complicated CPUs and ISAs, a microprogram is often used to assist in translating instructions into various configuration signals for the CPU. This microprogram is sometimes rewritable so that it can be modified to change the way the CPU decodes instructions even after it has been manufactured.
After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are connected so they can perform the desired operation. If, for instance, an addition operation was requested, an arithmetic logic unit (ALU) will be connected to a set of inputs and a set of outputs. The inputs provide the numbers to be added, and the outputs will contain the final sum. The ALU contains the circuitry to perform simple arithmetic and logical operations on the inputs (like addition and bitwise operations). If the addition operation produces a result too large for the CPU to handle, an arithmetic overflow flag in a flags register may also be set.
The final step, writeback, simply "writes back" the results of the execute step to some form of memory. Very often the results are written to some internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but cheaper and larger, main memory. Some types of instructions manipulate the program counter rather than directly produce result data. These are generally called "jumps" and facilitate behavior like loops, conditional program execution (through the use of a conditional jump), and functions in programs.[5] Many instructions will also change the state of digits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, one type of "compare" instruction considers two values and sets a number in the flags register according to which one is greater. This flag could then be used by a later jump instruction to determine program flow.
After the execution of the instruction and writeback of the resulting data, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If the completed instruction was a jump, the program counter will be modified to contain the address of the instruction that was jumped to, and program execution continues normally. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline," which in fact is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the memory-access stage of the pipeline.
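The four-step cycle can be made concrete with a toy simulator. The four-instruction ISA (LOADI, ADD, JNZ, HALT) and the register names below are invented purely for illustration; real instruction sets are far richer.

# A toy CPU illustrating the fetch-decode-execute-writeback cycle.
LOADI, ADD, JNZ, HALT = range(4)

def run(program):
    regs = {}
    pc = 0                              # program counter
    while True:
        op, *args = program[pc]         # fetch the instruction, then decode it
        pc += 1                         # PC now points at the next instruction
        if op == LOADI:                 # execute, then write back to a register
            reg, value = args
            regs[reg] = value
        elif op == ADD:                 # ALU addition; result written back
            dst, src = args
            regs[dst] += regs[src]
        elif op == JNZ:                 # a jump manipulates the PC instead
            reg, target = args
            if regs[reg] != 0:
                pc = target
        elif op == HALT:
            return regs

# Sum 3 + 2 + 1 by looping: r0 accumulates, r1 counts down, r2 holds -1.
prog = [(LOADI, "r0", 0), (LOADI, "r1", 3), (LOADI, "r2", -1),
        (ADD, "r0", "r1"), (ADD, "r1", "r2"), (JNZ, "r1", 3), (HALT,)]
print(run(prog))                        # {'r0': 6, 'r1': 0, 'r2': -1}

Note how JNZ writes to the program counter rather than to a data register, which is exactly how loops and conditional execution arise in the description above.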
Most CPUs, and indeed most sequential logic devices, are synchronous in nature.[8] That is, they are designed and operate on assumptions about a synchronization signal. This signal, known as a clock signal, usually takes the form of a periodic square wave. By calculating the maximum time that electrical signals can move in various branches of a CPU's many circuits, the designers can select an appropriate period for the clock signal.
This period must be longer than the amount of time it takes for a signal to move, or propagate, in the worst-case scenario. In setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism (see below).
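As a worked example of this constraint (the delay figure is hypothetical): if the worst-case propagation delay through the slowest path is 2 ns, the clock period must exceed 2 ns, which caps the clock rate below 500 MHz.

# Maximum clock rate implied by a hypothetical worst-case propagation delay.
worst_case_delay_s = 2e-9                  # 2 ns through the slowest path
max_clock_hz = 1 / worst_case_delay_s      # the period must exceed the delay
print(max_clock_hz / 1e6, "MHz")           # 500.0 MHz upper bound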

However architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided in order to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue as clock rates increase dramatically is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does heat dissipation, causing the CPU to require more effective cooling solutions.
One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs.[9] Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire asynchronous CPUs have been built without utilizing a global clock signal. Two notable examples of this are the ARM compliant AMULET and the MIPS R3000 compatible MiniMIPS. Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers (Garside et al. 1999).
A CPU socket or CPU slot is a connector on a computer's motherboard that accepts a CPU and forms an electrical interface with it. As of 2007, most desktop and server computers, particularly those based on the Intel x86 architecture, include socketed processors.

Most CPU socket interfaces are based on the pin grid array (PGA) architecture, in which short, stiff pins on the underside of the processor package mate with holes in the socket. To minimize the risk of bent pins, zero insertion force (ZIF) sockets allow the processor to be inserted without any resistance, then grip the pins firmly to ensure a reliable contact after a lever is flipped.
As of 2007, land grid array (LGA) sockets are becoming increasingly popular, with several current and upcoming socket designs using this scheme. With LGA sockets, the socket contains pins that make contact with pads or lands on the bottom of the processor package.
In the late 1990s, many x86 processors fit into slots, rather than sockets. CPU slots are single-edged connectors similar to expansion slots, into which a PCB holding a processor is inserted. Slotted CPU packages offered two advantages: L2 cache memory could be upgraded by installing an additional chip onto the processor PCB, and processor insertion and removal was often easier. However, slotted packages require longer traces between the CPU and chipset, and therefore became unsuitable as clock speeds passed 500 MHz. Slots were abandoned with the introduction of AMD's Socket A and Intel's Socket 370.
AMD
Desktop
• Super Socket 7 - AMD K6-2, AMD K6-III; Rise mP6.
• Slot A - AMD Athlon
• Socket A (also known as "Socket 462", 462-contact PGA) - AMD socket supporting Athlon, Duron, Athlon XP, Athlon XP-M, Athlon MP, Sempron, and Geode processors.
• Socket 754 (754-contact PGA) - AMD single-processor socket featuring single-channel DDR-SDRAM. Supports AMD Athlon 64, Sempron, Turion 64 processors.
• Socket 939 (939-contact PGA) - AMD single-processor socket featuring dual-channel DDR-SDRAM. Supports Athlon 64, Athlon 64 FX to 1 GHz[4], Athlon 64 X2 to 4800+, and Opteron 100-series processors. Superseded by Socket AM2 about 2 years after launch.
• Socket 940 (940-contact PGA) - AMD single- and multi-processor socket featuring registered (ECC) DDR-SDRAM. Intended for Opteron servers, but also used for "SledgeHammer" series Athlon 64 FX processors.
• Socket AM2 (940-contact PGA) - AMD single-processor socket featuring DDR2-SDRAM. Replaces Socket 754 and Socket 939[4] (sometimes confused with the Socket 940 used for server processors). Supports Athlon 64, Athlon 64 X2, Athlon 64 FX, Opteron, Sempron and Phenom processors.
• Socket AM2+ (940-contact PGA) - AMD socket for single processor systems. Features support for DDR2 and HyperTransport 3 with separated power lanes. (Replaces Socket AM2 , electrically compatible with Socket AM2). Supports Athlon 64, Athlon 64 X2, Athlon 64 FX, Opteron, and Phenom processors.
• Socket AM3 (938-contact PGA) - Future AMD socket for single processor systems. Features support for DDR3 and HyperTransport 3 with separated power lanes. Planned to launch Quarter 2 of 2009. Replaces Socket AM2+ with support for DDR3-SDRAM.
Mobile
• Socket 563 - AMD low-power mobile Athlon XP-M (563-contact µ-PGA, mostly mobile parts).
• Socket 754
• Socket S1 - AMD socket for mobile platforms featuring DDR2-SDRAM. Replaces Socket 754 for mobile processors (638-contact PGA).
• Socket FS1 - future Fusion processors for notebook market with CPU and GPU functionality (codenamed Swift), supporting DDR3 SDRAM, to be released in 2009.
Server
• Socket 940 - AMD single and multi-processor socket featuring DDR-SDRAM. Supports AMD Opteron[4] (2xx and 8xx Series), Athlon 64 FX processors (940-contact PGA).
• Socket A
• Socket F (also known as "Socket 1207") - AMD multi-processor socket featuring DDR2-SDRAM. Supports AMD Opteron[4](2xxx and 8xxx Series) and Athlon 64 FX processors. Replaces Socket 940 (LGA 1207-contact), and partially compatible with Socket F+.
• Socket F+ - Future AMD multi-processor socket featuring a higher speed HyperTransport interconnect of up to 2.6 GHz. Replaces Socket F; Socket F processors remain supported for backward compatibility.
• Future processor which is in development under the Fusion project codename, will employ Socket FS1 and two other sockets.
• Socket G34 - successor to socket F+, originally planned as Socket G3 paired with Socket G3 Memory Extender for servers to expand memory.
Intel
Desktop
• Slot 1 - Intel Celeron, Pentium II, Pentium III
• Socket 370 - Intel Pentium III, Celeron; Cyrix III; VIA C3
• Socket 423 - Intel Pentium 4[5] and Celeron processors (Willamette core)
• Socket 478 - Intel Pentium 4, Celeron, Celeron D, Pentium 4 Extreme Edition[5], Pentium M (Northwood, Prescott, and Willamette cores)
• Socket B (LGA 1366 [6]) - a new socket for Intel CPUs incorporating the integrated memory controller and Intel QuickPath Interconnect. (Bloomfield)
• Socket T (also known as Socket 775 or LGA 775) - Intel Pentium 4, Pentium D, Celeron D, Pentium Extreme Edition, Core 2 Duo, Core 2 Extreme, Celeron[5], Xeon 3000 series, Core 2 Quad (Northwood, Prescott, Conroe, Kentsfield, Cedar Mill, Wolfdale and Yorkfield cores)
Mobile
• Socket 441 - Intel Atom
• Socket 479 - Intel Pentium M and Celeron M (Banias and Dothan cores)
• Socket 495 - Also known as PPGA-B495, used for Mobile P3 Coppermine and Celerons [7]
• Socket M - Intel Core Solo, Intel Core Duo and Intel Core 2 Duo (some Merom cores and all Yonah cores)
• Micro-FCBGA - Intel Mobile Celeron, Core 2 Duo (mobile), Core Duo, Core Solo, Celeron M, Pentium III (mobile), Mobile Celeron
• Socket P - Intel-based; replaces Socket 479 and Socket M. Released May 9th, 2007. (Merom and Penryn Cores)
• Socket 956 - Intel Core 2 Duo (Penryn core)
Server
• Socket 8 - Intel Pentium Pro
• Slot 2 - Intel Pentium II Xeon, Pentium III Xeon
• Socket 603 - Intel Xeon (Northwood and Willamette cores)
• Socket 604 - Intel Xeon
• PAC418 - Intel Itanium
• PAC611 - Intel Itanium 2, HP PA-RISC 8800 and 8900
• Socket J (also known as Socket 771 or LGA 771) - Intel Xeon (Woodcrest core)
• Socket N - Intel Dual-Core Xeon LV
Others
• Socket 463 (also known as Socket NexGen) - NexGen Nx586
• Socket 499 - Alpha 21164
• Slot B - Alpha 21264
• Slotkets - adapters for using socket processors in bus-compatible slot motherboards
Early sockets
Prior to Intel's introduction of the proprietary Slot 1 in 1997, CPU sockets were de facto open standards and were often used by multiple manufacturers.[1]
• DIP socket (40 contacts) - Intel 8086, Intel 8088
• PLCC socket (68 contacts) - Intel 80186[2][3]
• PLCC socket - Intel 80286
• PLCC socket - Intel 80386
• Socket 1 - 80486
• Socket 2 - 80486
• Socket 3 - 80486 (3.3 V and 5 V) and compatibles
• Socket 4 - Intel Pentium 60/66 MHz
• Socket 5 - Intel Pentium 75-133 MHz; AMD K5; IDT WinChip C6, WinChip 2
• Socket 6 - Designed for the 80486, but little used
• Socket 7 - Intel Pentium, Pentium MMX; AMD K6; some Cyrix CPUs