Processors are probably the single most interesting piece of hardware in your computer. They have a rich history, dating all the way back to 1971 and the first commercially available microprocessor, the Intel 4004. As you can imagine, and have no doubt seen yourself, technology has since improved by leaps and bounds.
We’re going to show you the history of the processor, starting with the Intel 8086. Its close cousin, the 8088, was the processor IBM chose for the first PC, and the family has a neat history from then on out.
Editor’s Note: This article was originally published in 2001, but as of December 2016, we’ve updated it to include new advancements in the field since then.
CPUs have gone through many changes in the years since Intel came out with the first one. IBM chose Intel’s 8088 processor as the brains of the first PC, and that choice is what made Intel the perceived leader of the CPU market. Intel remains the perceived leader of microprocessor development today. While newer contenders have developed their own technologies for their own processors, Intel continues to be more than a viable source of new technology in this market, with the ever-growing AMD nipping at its heels.
The first four generations of Intel processors took on the “8” as the series name, which is why the technical types refer to this family of chips as the 8088, 8086, and 80186, right on up to the 80486, or simply the 486. The following chips are considered the dinosaurs of the computer world. PCs based on these processors are the kind that usually sit around in the garage or warehouse collecting dust. They are not of much use anymore, but we geeks don’t like throwing them out because they still work. You know who you are.
- Intel 8086 (1978)
This chip was skipped over for the original PC, but was used in a few later computers that didn’t amount to much. It was a true 16-bit processor and talked with its cards via a 16-line data bus. The chip contained 29,000 transistors and 20 address lines that gave it the ability to talk with up to 1 MB of RAM. What is interesting is that the designers of the time never suspected anyone would ever need more than 1 MB of RAM. The chip was available in 5, 6, 8, and 10 MHz versions.
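That 1 MB figure isn’t arbitrary: it follows directly from the 20 address lines, since each line doubles the number of addressable bytes. A quick sketch of the arithmetic (the function name here is ours, purely for illustration):

```python
# Address-bus width determines maximum addressable memory:
# an n-line address bus can select 2**n distinct byte addresses.

def max_addressable_bytes(address_lines: int) -> int:
    """Maximum memory reachable with the given number of address lines."""
    return 2 ** address_lines

# The 8086/8088's 20 address lines give the famous 1 MB limit.
print(max_addressable_bytes(20) // (1024 * 1024))  # 1 (MB)

# For comparison, the 386's 32-bit address bus reaches a full 4 GB.
print(max_addressable_bytes(32) // (1024 ** 3))    # 4 (GB)
```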
- Intel 8088 (1979)
The 8088 is, for all practical purposes, identical to the 8086. The main difference is that it uses an 8-bit external data bus rather than the 8086’s 16-bit bus. This chip was the one chosen for the first IBM PC, and like the 8086, it is able to work with the 8087 math coprocessor chip.
- NEC V20 and V30 (1981)
Clones of the 8088 and 8086, though they are supposed to be about 30% faster than the Intel originals.
- Intel 80186 (1980)
The 186 was a popular chip, and many versions were developed over its history. Buyers could choose CHMOS or HMOS, and 8-bit or 16-bit versions, depending on what they needed. A CHMOS chip could run at twice the clock speed and one quarter the power of the HMOS chip. In 1990, Intel came out with the Enhanced 186 family. These all shared a common 1-micron core design and ran at about 25 MHz at 3 volts. The 80186 offered a high level of integration, with the system controller, interrupt controller, DMA controller, and timing circuitry right on the CPU. Despite this, the 186 never found itself in a personal computer.
- Intel 80286 (1982)
A 16-bit, 134,000-transistor processor capable of addressing up to 16 MB of RAM. In addition to the increased physical memory support, this chip is able to work with virtual memory, allowing for much more expandability. The 286 was the first “real” processor. It introduced the concept of protected mode: the ability to multitask, having different programs run separately but at the same time. This ability was not taken advantage of by DOS, but future operating systems, such as Windows, could play with this new feature. One of the drawbacks, though, was that while the chip could switch from real mode to protected mode (real mode was intended to keep it backward compatible with the 8088), it could not switch back to real mode without a warm reboot. This chip was used by IBM in its Advanced Technology PC/AT and in a lot of IBM compatibles. It ran at 8, 10, and 12.5 MHz, though later editions of the chip ran as high as 20 MHz. While these chips are considered paperweights today, they were rather revolutionary for the time period.
- Intel 386 (1985 – 1990)
The 386 signified a major increase in technology from Intel. The 386 was a 32-bit processor, meaning its data throughput was immediately twice that of the 286. Containing 275,000 transistors, the 80386DX processor came in 16, 20, 25, and 33 MHz versions. The 32-bit address bus allowed the chip to work with a full 4 GB of RAM and a staggering 64 TB of virtual memory. In addition, the 386 was the first chip to use instruction pipelining, which allows the processor to start working on the next instruction before the previous one is complete. While the chip could run in both real and protected mode (like the 286), it could also run in virtual real mode, allowing several real mode sessions to be run at a time. A multitasking operating system such as Windows was necessary to do this, though. In 1988, Intel released the 386SX, which was basically a low-fat version of the 386. It used a 16-bit data bus rather than the 32-bit one and was slower, but it also used less power, which enabled Intel to promote the chip into desktops and even portables. In 1990, Intel released the 80386SL, which was basically an 855,000-transistor version of the 386SX processor, with ISA compatibility and power management circuitry.
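The payoff of instruction pipelining can be sketched with a little cycle-count arithmetic. This is a deliberately idealized model (the stage count and function name are ours, for illustration only): without pipelining, each instruction must pass through every stage before the next one starts; with pipelining, once the pipe is full, one instruction finishes per cycle.

```python
def cycles(instructions: int, stages: int, pipelined: bool) -> int:
    """Idealized cycle count for a simple in-order CPU.

    Non-pipelined: every instruction occupies all stages back to back.
    Pipelined: `stages` cycles to fill the pipe, then one instruction
    retires per cycle (ignoring stalls and hazards).
    """
    if pipelined:
        return stages + instructions - 1
    return stages * instructions

# 100 instructions through a 5-stage machine:
print(cycles(100, 5, pipelined=False))  # 500 cycles
print(cycles(100, 5, pipelined=True))   # 104 cycles
```

The gap only widens as the instruction count grows, which is why pipelining became standard from the 386 onward.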
386 chips were designed to be user friendly. All chips in the family were pin-for-pin compatible, and they were binary compatible with earlier x86 chips, meaning that users didn’t have to get new software to use them. Also, the 386 offered power-friendly features such as low voltage requirements and System Management Mode (SMM), which could power down various components to save power. Overall, this chip was a big step for chip development. It set the standard that many later chips would follow, and its simple design was easy for developers to build on.
Intel 486 (1989 – 1994)
The 486 chip was the first processor from Intel designed to be upgradeable. Previous processors were not designed this way, so when the processor became obsolete, the entire motherboard needed to be replaced. With the 486, the same CPU socket could accommodate several different flavors of the 486. Initial 486 offerings were designed to be upgraded using “OverDrive” technology, meaning you could insert a chip with a faster internal clock into the existing system. Not all 486 systems could use OverDrive, since it takes a certain type of motherboard to support it.
The first member of the 486 family was the i486DX, but in 1991 Intel released the 486SX and 486DX/50. Both chips were basically the same, except that the 486SX version had the math coprocessor disabled (yes, it was there, just turned off). The 486SX was, of course, slower than its DX cousin, but the resulting reduced cost and power lent itself to faster sales and movement into the laptop market. The 486DX/50 was simply a 50 MHz version of the original 486. The DX/50 could not support future OverDrives, while the SX processors could.
In 1992, Intel released the next wave of 486’s making use of OverDrive technology. The first models were the i486DX2/50 and i486DX2/66. The extra “2” in the names indicates that the normal clock speed of the processor is effectively doubled using OverDrive, so the 486DX2/50 is a 25 MHz chip doubled to 50 MHz. The slower base speed allowed the chip to work with existing motherboard designs, while internally the chip operated at the increased speed, thereby increasing performance.
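The multiplier scheme is simple arithmetic: the core runs at the board’s bus speed times a fixed multiplier. A minimal sketch (function name ours, for illustration):

```python
def internal_clock(bus_mhz: float, multiplier: float) -> float:
    """Effective core speed of a clock-multiplied CPU: bus clock x multiplier."""
    return bus_mhz * multiplier

# 486DX2/50: a 25 MHz board, doubled internally.
print(internal_clock(25, 2))  # 50.0 MHz

# DX4 "tripler": a 33 MHz board tripled internally.
# (The real bus was 33.3 MHz, which is how marketing arrived at 100 MHz.)
print(internal_clock(33, 3))  # 99.0 MHz
```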
Also in 1992, Intel put out the 486SL. It was virtually identical to vintage 486 processors, but it contained 1.4 million transistors. The extra innards were used by its internal power management circuitry, optimizing it for mobile use. From there, Intel released various 486 flavors, mixing SL’s with SX’s and DX’s at a variety of clock speeds. By 1994, they were rounding out their continued development of the 486 family with the DX4 Overdrive processors. While you might think these were 4X clock quadruplers, they were actually 3X triplers, allowing a 33 MHz processor to operate internally at 100 MHz.
AM486DX Series (1994 – 1995)
Intel was not the only manufacturer playing in the sandbox at the time. AMD put out its AM486 series in answer to Intel’s chips, releasing it in AM486DX4/75, AM486DX4/100, and AM486DX4/120 versions. It contained onboard cache, power management features, 3-volt operation, and SMM mode. This made the chip fitting for mobiles in addition to desktops, and it found its way into many 486 compatibles.
AMD AM5x86 (1995)
The Pentium (1993)
The Pentium family includes the 60/66/75/90/100/120/133/150/166/200 MHz clock speeds. The original 60/66 MHz versions operated on the Socket 4 setup, while all of the remaining versions operated on Socket 7 boards. Some of the chips (75 MHz – 133 MHz) could operate on Socket 5 boards as well. The Pentium is compatible with all of the older operating systems, including DOS, Windows 3.1, Unix, and OS/2. Its superscalar design can execute two instructions per clock cycle. The two separate 8K caches (code cache and data cache) and the pipelined floating point unit increase its performance beyond the earlier x86 chips. It had the SL power management features of the i486SL, but the capability was much improved. It has 273 pins that connect it to the motherboard. Internally, though, it’s really two 32-bit chips chained together that split the work. The first Pentium chips operated at 5 volts and thus ran rather hot. Starting with the 100 MHz version, the requirement was reduced to 3.3 volts. Starting with the 75 MHz version, the chip also supported Symmetric Dual Processing, meaning you could use two Pentiums side by side in the same system.
The Pentium stayed around a long time and was released in many different speeds as well as different flavors. In fact, Intel implemented an “s-spec” rating, marked on each Pentium CPU, which tells the owner some key data about the processor so they can make sure their motherboard is set correctly. There were just so many different Pentiums out there that it became hard to tell them apart. You can look up processor specs using the s-spec at the link below.
Related Link: Intel Processor Spec Finder
The Pentium Pro (1995-1999)
It has two separate 8K L1 caches (one for data and one for instructions), and up to 1 MB of onboard L2 cache in the same package. The onboard L2 cache increased performance in and of itself, because the chip did not have to make use of an L2 cache on the motherboard. The Pentium Pro is optimized for 32-bit code, so it runs 16-bit code no faster than a Pentium, which is a big drawback. It’s still a great processor for servers, since it can be used in multiprocessor systems with up to four processors. Another good thing about the Pentium Pro is that with the use of a Pentium II OverDrive processor, you get all the perks of a normal Pentium II, but the L2 cache runs at full speed, and you keep the multiprocessor support of the original Pentium Pro.
Cyrix 6×86 Series (1995)
Cyrix had a reputation for lagging in the area of performance, and the M1 was no exception. The chip used a weaker FPU than both AMD’s and Intel’s, meaning it could not keep up with the competition in areas such as 3D gaming or other math-intensive software. On top of that, the chip had a reputation for running hot, and users had to get CPU fans that could keep these processors cool enough to run stably. Cyrix tried to combat this issue with the 6x86L processor. This “low power” processor made use of a split voltage (3.3 volts for I/O and 2.8 volts internally).
The integration of the MediaGX actually spanned two chips: the processor itself and the MediaGX Cx5510 companion chip. The chip requires a specially designed motherboard and is not Socket 7 compatible. As a result, it is really an outsider in relation to the other processors we have been discussing, but since it sits on the timeline of CPU history, it bears mentioning.
AMD K5 (1996)
While AMD competed with Intel with its 5x86 processor, that chip was not a true Pentium alternative. In 1996, however, AMD released the K5. This chip was designed to go head to head with the Pentium processor and to fit right into Socket 7 motherboards, allowing users to drop K5’s into motherboards they might already have had. The chip was fully compatible with all x86 software. In order to rate the speed of the chips, AMD devised the P-rating system (or PR rating). This number identified the speed as compared to the true Intel Pentium equivalent. K5’s ran from 75 MHz to 166 MHz (in P-ratings, that is). They contained 24 KB of L1 cache and 4.3 million transistors. While the K5’s were nice little chips for what they were, AMD quickly moved on with the release of the K6.
Pentium MMX (1997)
MMX was not the only improvement in the Pentium MMX. The dual 8K caches of the Pentium were doubled to 16 KB each. It also had improved dynamic branch prediction, a pipelined FPU, and an additional instruction pipe to allow faster instruction processing. With these and other improvements, the Pentium line of processors was extended even longer. The line lasted up until recently and went up to 233 MHz. While new PCs with this processor are all but non-existent, there are many older PCs still using this processor and going strong.
AMD K6 (1997)
Cyrix 6x86MX (1997)
Well, Intel came up with MMX, and AMD was already using it starting with the K6. So, Cyrix had to get in on the game as well. The 6x86MX, also dubbed “M2”, was Cyrix’s answer. This processor took on the MMX instruction set, as well as an increased 64 KB cache and a boost in speed. The first M2’s were 150 MHz chips, carrying a P-rating of PR166 (yes, M2’s also used the P-rating system). The fastest ones operated at 333 MHz, or PR466.
The M2 was the last processor released by Cyrix as a stand-alone company. In 1999, Via Technologies acquired the Cyrix line from its parent company, National Semiconductor. At the same time, Via also acquired the Centaur processor division from IDT.
Pentium II (1997)
One of the most noticeable changes in this processor is the change in package style. Almost all of the Pentium-class processors use the Socket 7 interface to the motherboard, and Pentium Pro’s use Socket 8. The Pentium II, however, makes use of “Slot 1”. The package type of the P2 is called Single-Edge Contact (SEC). The chip and L2 cache actually reside on a card which attaches to the motherboard via a slot, much like an expansion card, and the entire P2 package is surrounded by a plastic cartridge. In addition to Intel’s departure into Slot 1, they also patented the new Slot 1 interface, effectively barring the competition from making competing chips for the new Slot 1 motherboards. This move, no doubt, demonstrates why Intel moved away from Socket 7 to begin with – they couldn’t patent it.
The original Pentium II was code-named “Klamath”. It ran on a paltry 66 MHz bus and ranged from 233 MHz to 300 MHz. In 1998, Intel did some slight re-working of the processor and released “Deschutes”. They used a 0.25 micron design technology for this one and allowed a 100 MHz system bus. The L2 cache was still separate from the actual processor core and still ran at only half speed; they would not rectify this issue until the release of the Celeron A and Pentium III. Deschutes ran from 333 MHz up to 450 MHz.
Intel realized the mistake of the original cache-less Celeron with the next edition, the Celeron 300A. The 300A came with 128 KB of L2 cache on board. The L2 cache was on-die with the 300A, meaning it ran at full processor speed, not half speed like the Pentium II’s. This was great for Intel users, because the Celerons with full speed cache operated much better than the Pentium II’s with 512 KB of cache running at half speed. With this, and the fact that Intel unleashed the bus speed of the Celeron, the 300A became well known in overclocking enthusiast circles. It quickly became known as the cheap chip you could buy and crank up to compete with the more expensive stuff.
The Celeron is available in two formats. The original Celerons used the patented Slot 1 interface, but Intel later switched over to a PPGA format, or Plastic Pin Grid Array, also known as Socket 370. This new interface reduced manufacturing costs and allowed cheaper conversion from Socket 7 boards to Socket 370: motherboard manufacturers found it easier to swap out a Socket 7 socket for a Socket 370 socket, more or less leaving the rest of the board the same, than to change their designs over to a slotted board. Slot 1 Celerons ranged from the original 266 MHz up to 433 MHz, while Celerons 300 MHz and up were available in Socket 370.
AMD K6-2 & K6-3 (1998)
AMD was a busy little company at the time Intel was playing around with its Pentium II’s and Celerons. In 1998, AMD released the K6-2. The “2” shows that there are some enhancements made to the proven K6 core, with higher clock speeds and higher bus speeds. They probably were also taking a page out of the Pentium “2” book. The most notable new feature of the K6-2 was the addition of 3DNow technology. Just as Intel created the MMX instruction set to speed up multimedia applications, AMD created 3DNow as an additional 21 instructions on top of the MMX instruction set. With software designed to use the 3DNow instructions, multimedia applications get even more boost. Using 3DNow, a larger L1 cache, and Socket 7 usability, the K6-2 gained ranks in the market without too much trouble.
The K6-3 processor was basically a K6-2 with 256 KB of on-die L2 cache; on Socket 7 boards that carried their own cache, that motherboard cache effectively became L3 cache. The chip could compete well with the Pentium II and even early Pentium III’s. In order to eke out the full potential of the processor core, though, AMD fine-tuned the limits of the processor, leaving the K6-2 and K6-3 a bit picky. The split voltage requirements were pretty rigid, and as a result AMD maintained a list of “approved” boards that could tolerate such fine control over the voltages. Processor cooling was also an important issue with these chips due to the increased heat. In that regard, they were a bit like the Cyrix 6x86MX processors.
Pentium III (1999)
In April of 2000, Intel released their Pentium III Coppermine. While Katmai had 512 KB of L2 cache, Coppermine had half that at only 256 KB. But, the cache was located directly on the CPU core rather than on the daughtercard as typified in previous Slot 1 processors. This made the smaller cache an actual non-issue, because performance benefited. Coppermine also took on a 0.18 micron design and the newer Single Edge Contact Cartridge 2 (SECC 2) package. With SECC 2, the surrounding cartridge only covered one side of the package, as opposed to previous slotted processors. What’s more, Intel again saw the logic they had when they took Celeron over to Socket 370, so they eventually released versions of Coppermine in socket format. Coppermine also supported the 133 MHz front side bus. Coppermine proved to be a performance chip and it was and still is used by many PCs. Coppermine eventually saw 1+ GHz.
AMD Athlon (1999)
Also notable with the release of Athlon was the entirely new system bus. AMD licensed the Alpha EV6 technology from Digital Equipment Corporation. This bus operated at 200MHz, faster than anything Intel was using. The bus had a bandwidth capability of 1.6 GB/s.
Athlon has gone through revisions and improvements and is still being used and marketed. In June of 2000, AMD released the Athlon Thunderbird. This chip came with an improved 0.18 micron design, on-die full speed L2 cache (new for Athlon), DDR RAM support, etc. It is a real workhorse of a chip and has a reputation for being able to be pushed well beyond the speed rating as assigned by AMD. Overclocker’s paradise. Thunderbird was also released in Socket A (or Socket 462) format, so AMD was now returning to its socketed roots just as Intel had already done by this time.
When AMD released the Palomino to the desktop market in October of 2001, they renamed the chip to Athlon XP and also took on a slightly different naming scheme. Due to the way Palomino executes instructions, the chip can actually perform more work per clock cycle than the competition, namely the Pentium IV. Therefore, the chips actually operate at a slower clock speed than AMD makes apparent in the model numbers. AMD chose to name the Athlon XP versions based on the speed rating of the processor as determined by AMD and its own benchmarking. So, for example, the Athlon XP 1600+ runs at 1.4 GHz, but the average computer user will think 1.6 GHz, which is what AMD wants. This is not to say that AMD is tricking anybody. In fact, these chips do perform like a Thunderbird at the rated speed, and they perform quite well when stacked against the Pentium IV; the Athlon XP 1800+ can out-perform the Pentium IV at 2 GHz. Besides the naming, the XP was basically the same as the mobile Palomino released a few months earlier. It did boast a new packaging style that would help AMD’s release of 0.13 micron design chips later on. It also operated on the 133 MHz front-side bus (266 MHz when DDR is taken into account). AMD continued to use the Palomino core until the release of the Athlon XP 2100+, which was the last Palomino.
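The model number and the actual clock are therefore two different things. A small lookup sketch makes the gap concrete; the 1600+ figure comes straight from the text, and the remaining Palomino entries follow AMD's published ratings (treat the table as illustrative, not exhaustive):

```python
# Actual core clocks behind AMD's Athlon XP (Palomino) model numbers.
# The "+" number is a performance rating, not a clock speed.
ATHLON_XP_CLOCK_MHZ = {
    "1600+": 1400,
    "1700+": 1467,
    "1800+": 1533,
    "1900+": 1600,
    "2000+": 1667,
    "2100+": 1733,
}

model = "1600+"
actual = ATHLON_XP_CLOCK_MHZ[model]
print(f"Athlon XP {model} actually runs at {actual} MHz")  # 1400 MHz, not 1600
```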
In June of 2002, AMD announced the 0.13 micron Thoroughbred-based 2200+ processor. The move was more of a financial one, since there are no real performance gains between Palomino and Thoroughbred. Nonetheless, the smaller core means AMD can produce more of them per silicon wafer, and that just makes sense. AMD is really teasing everyone with news of the coming ClawHammer core, which will be AMD’s next big move. But, with that chip still in the development and testing phase at this point, ClawHammer is not yet ready. Until it is, AMD will keep us mildly entertained with Thoroughbred and keep Intel sweating.
Celeron II (2000)
Just as the Pentium III was a Pentium II with SSE and a few added features, the Celeron II is simply a Celeron with SSE and a few added features. The chip is available from 533 MHz to 1.1 GHz. This chip was basically an enhancement of the original Celeron, and it was released in response to AMD’s coming competition in the low-cost market with the Duron. The Processor Serial Number (PSN) feature of the Pentium III was disabled in the Celeron II, with Intel stating that the feature was not necessary in the entry-level consumer market. Due to some inefficiencies in the L2 cache and the continued use of the 66 MHz bus (unless you overclock), this chip would not hold up too well against the Duron, despite being based on the trusted Coppermine core. The Celeron II would not be released with true 100 MHz bus support until the 800 MHz edition, which was put out at the beginning of 2001.
In August of 2001, AMD released the Duron “Morgan”. This chip broke out at 950 MHz but quickly moved past 1 GHz. The Morgan processor core was the key to the improvement of Duron here, and it is comparable to the effect of the Palomino core on the Athlon. In fact, feature-wise, the Morgan core is basically the same as the Palomino core, but with 64 KB of L2 rather than 256 KB.
Pentium IV (2000)
According to Intel, NetBurst is made up of four new technologies: Hyper Pipelined Technology, Rapid Execution Engine, Execution Trace Cache and a 400MHz system bus. Let’s look at the first three, since they require some explanation:
- Hyper Pipelined Technology
There are a couple of ways to increase the speed of a processor. One is to decrease the die size. Technology in this regard develops quickly, but not quickly enough. The P5 core saw its limit quickly, and so did the P6 core (which is why the Pentium III topped out at around 1 GHz). The technology to move to a smaller die size was not yet ready at the time of the Willamette release, so Intel moved to plan B: change the design of the CPU pipeline so that it is longer and can accommodate more in-flight instructions. This is what Intel did. Hyper Pipelined Technology refers to Intel’s expanding of the CPU pipeline from 10 stages (in the P6) to 20 stages. This effectively makes the data pipe (a bad term, but descriptive) deeper, and lets each stage do less per clock cycle than a P6 stage. The fact that each stage does less per clock cycle is what gives this design room for higher clock speeds. It is analogous to an assembly line: with more, simpler stations, each station finishes its small task faster, so the whole line can run at a faster pace. The tradeoff in expanding the pipeline to this many stages is that it takes the processor longer to recover from mistakes in branch prediction, since it basically has to start over with 20 stages rather than a shorter 10-stage pipeline. The P4, though, has a newly advanced branch predictor to help with this problem.
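The misprediction tradeoff can be sketched with a toy throughput model (the numbers and function name are illustrative assumptions, not measured P4 figures): each mispredicted branch flushes the pipeline, costing roughly one cycle per stage, so the same misprediction rate hurts a deeper pipeline proportionally more.

```python
def avg_cycles_per_instruction(depth: int, mispredict_rate: float) -> float:
    """Toy model: one instruction retires per cycle, plus a flush penalty
    of roughly `depth` cycles for each mispredicted branch."""
    return 1 + mispredict_rate * depth

# The same 2% misprediction rate costs twice as much on a 20-stage pipe.
print(avg_cycles_per_instruction(10, 0.02))  # 1.2 cycles/instruction (P6-like)
print(avg_cycles_per_instruction(20, 0.02))  # 1.4 cycles/instruction (P4-like)
```

This is why Intel paired the 20-stage pipeline with a much better branch predictor: pushing the misprediction rate down directly offsets the deeper pipe's larger flush penalty.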
- Rapid Execution Engine
The Pentium IV contains two arithmetic logic units, and they operate at twice the clock speed of the processor. While this might sound like absolute heaven, keep in mind that Intel had to do it this way, given the pipeline design, just to keep integer performance up to the level of the Pentium III. So this is really a necessary design change brought on by the increased pipeline depth.
- Execution Trace Cache
Intel also did some re-working of the P4’s internal cache in order to nullify the effects of a branch prediction mistake, which can be a real lag with a 20-stage pipeline. First, they increased the branch target buffer to eight times the size of the Pentium III’s. This buffer is the area from which the branch predictor gets its data. Second, Intel reduced the size of the L1 data cache to only 8K in order to reduce its latency. This, no doubt, increases the reliance on the 256 KB on-die L2 cache, whose latency has been improved on the P4 as well. Lastly, Intel added an execution trace cache: a new cache that holds instructions that are already decoded and ready for execution. This means the processor does not have to waste time decoding every instruction again when a branch prediction error occurs. Instead, it can just go to this 12K cache, retrieve the decoded operation, and go.
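The idea behind the trace cache is caching decoded work so it never has to be redone. A toy software analogue (the instruction format and the decode step here are our own illustration, not the P4's real micro-op format):

```python
# Toy "trace cache": remember the decoded form of each instruction so
# that replaying instructions after a pipeline flush skips the decoder.
decoded_cache = {}
decode_calls = 0  # counts how often the "expensive" decoder actually runs

def decode(instruction: str) -> tuple:
    global decode_calls
    if instruction not in decoded_cache:
        decode_calls += 1  # the expensive decode work happens only once
        op, _, args = instruction.partition(" ")
        decoded_cache[instruction] = (op, args.split(","))
    return decoded_cache[instruction]

# Replay the same two instructions three times, as if after branch flushes.
for _ in range(3):
    decode("add eax,ebx")
    decode("mov eax,1")

print(decode_calls)  # 2: each distinct instruction was decoded only once
```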
The early Pentium 4’s made use of the Socket 423 interface. One of the reasons for the new interface was the addition of heatsink retention mechanisms to either side of the socket. This is a move to help owners avoid the dreaded mistake of crushing the CPU core by tightening the heatsink down on it too tightly; the retention bases hold the heatsink onto the CPU. Socket 423 was short-lived, and the Pentium IV quickly moved to Socket 478 with the release of the 1.9 GHz version. Also, the P4 was, at its launch, associated exclusively with Rambus RDRAM. Intel was stuck in this agreement with Rambus, and this was an obvious hurdle for promotion, given that most computer users do not have Rambus and don’t wish to buy any. So, early retail P4’s actually came packaged with two 64 MB sticks of RDRAM. Once chipset support arrived, DDR support for the Pentium IV eventually followed.
Pentium IV’s, as you might expect, were and still are on the expensive end of things. The new core was quite big when compared to other processors, and the cost to produce it was innately higher. In early 2002, Intel announced a new edition of the Pentium IV based on the Northwood core. The big news here is that Intel left the larger 0.18 micron Willamette core in favor of the new 0.13 micron Northwood. This shrunk the core and therefore allowed Intel not only to make Pentium IV’s cheaper but also to make more of them. The core is still bigger than that of the Athlon XP, but this is explained by the fact that Intel increased the L2 cache from 256 KB to 512 KB for Northwood, raising the transistor count from 42 million for Willamette to 55 million for Northwood. Northwood was first released in 2 GHz and 2.2 GHz versions, but the new design gives the P4 room to move up to 3 GHz quite easily. It was recently released at 2.53 GHz using a 533 MHz front side bus. Other than that, Northwood is architecturally the same as Willamette.
Pentium M (2003)
Intended for mobile uses, the Pentium M’s focus was on power efficiency in order to significantly improve the battery life of a laptop or notebook. With that in mind, the Pentium M runs at a much lower average power consumption as well as a much lower heat output. It has a maximum Thermal Design Power (TDP) of 5-27W.
Despite not being based on the Pentium IV, it runs at a lower clock speed than the laptop version of the Pentium IV but has similar performance capabilities. For instance, a typical Pentium M clocks in at 1.6 GHz, yet is more than capable of matching or surpassing the performance of a Pentium 4-M that clocks in at 2.4 GHz.
Athlon 64, Athlon 64 X2 and Sempron (2003)
AMD’s Athlon 64 is the successor to the Athlon XP and is the second of AMD’s processors to implement its own 64-bit architecture. The first processor to implement that 64-bit technology was the AMD Opteron, but that was targeted at commercial uses, such as servers and workstations. The Athlon 64, however, was the first 64-bit processor aimed at the consumer market. So, in a way, this was AMD’s first 64-bit venture into consumer territory.
It’s worth noting that the Athlon 64 was only a single-core processor. However, AMD eventually launched an improved version of the Athlon 64, the Athlon 64 X2. This newer version launched in 2005 was the first dual-core desktop processor that was designed by AMD. In May of 2006, AMD released Athlon 64 X2 versions with AMD virtualization technology, commonly referred to as AMD-V.
Before that, AMD launched another processor called the Athlon 64 FX, which was aimed at hardware enthusiasts (like gamers). This was for a couple of reasons: its multipliers were always unlocked, and the FX chips always had the highest clock speeds of all the Athlons at launch. Eventually, AMD launched the Athlon 64 FX-60, which is when the Athlon 64 FX line went dual-core.
At the time of the Athlon 64’s launch, it was only available in Socket 754 and Socket 940. AMD introduced the Athlon 64 on Socket 754 largely because its onboard memory controller was incapable of running non-registered (unbuffered) memory in dual-channel mode. Eventually, AMD launched the Athlon 64 on another socket, Socket 939, which was intended for the mainstream market and fixed the dual-channel memory limitation. Socket 939 essentially replaced Socket 754, and Athlon 64s sold on Socket 754 were moved to a budget line of processors.
Pentium 4 Prescott, Celeron D and Pentium D (2005)
The Pentium 4 Prescott was introduced in 2004 to mixed feelings. It was the first core to use the 90nm semiconductor manufacturing process. The Prescott was essentially a restructuring of the Pentium 4’s microarchitecture, and while that would normally be a good thing, there weren’t too many positives. Some programs were helped by the doubled cache as well as the SSE3 instruction set; unfortunately, other programs suffered because of the longer instruction pipeline.
It’s also worth noting that the Pentium 4 Prescott was able to achieve some pretty high clock speeds, but not nearly as high as Intel had hoped. One version of the Prescott was actually able to reach 3.8 GHz. Eventually, Intel released a version of the Prescott supporting Intel’s 64-bit architecture, Intel 64. To start out, these were only sold as the F-series to OEMs, but Intel eventually renamed it to the 5×1 series, which was sold to consumers.
Intel introduced another version of the Pentium 4 Prescott, the Celeron D. A major difference is that the Celeron D sported double the L1 and L2 cache of the previous Willamette- and Northwood-based desktop Celerons. It also gained the SSE3 instruction set and was manufactured for Socket 478. Overall, the Celeron D was a major performance improvement over many of the previous NetBurst-based Celerons, but it had one huge problem — excessive heat.
Eventually, Intel would go on to refresh the Celeron D with 64-bit architecture. These refreshed chips were no longer built for Socket 478, but for the LGA 775 socket type.
The true, and overall better, successor was the Intel Core 2 brand, which saw a lot of success.
Intel Core 2 (2006)
With the Intel Core 2 line, multi-core processors went mainstream. This was a necessary route for Intel to take: a multi-core processor is essentially a single component containing two or more independent processing units, commonly referred to as cores. With multiple cores, Intel was able to increase overall speed for programs, opening the path to the more demanding software we see today. That’s not to say Intel or AMD are responsible for today’s demanding programs, but without high-end processors and their breakthroughs in technology, we really wouldn’t have hardware that can run those programs.
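The basic pattern multiple cores enable is splitting one job into independent chunks and combining the partial results. Here’s a minimal sketch of that split-and-combine idea (nothing specific to Core 2; note that in CPython, pure-Python CPU-bound work is serialized by the interpreter’s GIL, so a real speedup would use `ProcessPoolExecutor` instead of threads — the structure is identical):

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(lo, hi):
    """Do one chunk of the work: sum n*n over the half-open range [lo, hi)."""
    return sum(n * n for n in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into one chunk per worker and combine the partial sums."""
    step = n // workers
    bounds = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: sum_of_squares(*b), bounds))

# Parallel and serial results agree; only the scheduling differs.
assert parallel_sum_of_squares(1000) == sum(n * n for n in range(1000))
```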
Core 2 branded processors came with a lot of neat technology. For instance, you had Intel’s own virtualization technology, 64-bit architecture, low power consumption, and SSE4 (Streaming SIMD Extensions 4, a processor instruction set).
AMD Phenom & Phenom II (2007)
There were some issues with early Phenom processors where the system would lock up in extremely rare instances, due to a flaw discovered in the translation lookaside buffer (TLB). Pretty much all early versions of the Phenom were affected, as the flaw wasn’t fixed until the B3 version of the processor in 2008. The processors without the bug also carried an “xx50” model number (the number “50” at the end of the model number indicates a processor without the bug).
After these issues, AMD went ahead and launched a successor at the end of 2008, the Phenom II. The Phenom II comes in a lot of versions: dual-core, triple-core and quad-core variants arrived in early 2009, and an improved quad-core model and a hex-core model came around early to mid 2010. Again, it’s based on the K10 microarchitecture, but built on the 45nm semiconductor manufacturing process. The Phenom II initially launched on Socket AM2+, but Socket AM3 versions launched in early 2009 with DDR3 support.
Intel Core i3, Core i5, and Core i7 (2008 – present)
Truth be told, there’s nothing more confusing than Intel’s naming convention here: Core i3, Core i5 and Core i7. What is that even supposed to mean? It’s confusing — particularly to the layperson — but hopefully I can give you the difference between the three tiers in plain language.
At the bottom of the stack sits the Core i3, which in this era is a dual-core part with hyperthreading (a technology that lets each core work on two instruction threads at once) but no Turbo Boost. The Core i5 is a tad more confusing. In mobile applications, the Core i5 has two cores and hyperthreading, while desktop variants have four cores (quad-core) but no hyperthreading. With it, you get improved onboard graphics as well as Turbo Boost, a way to temporarily accelerate processor performance when you need a little more heavy lifting.
And that brings us to the Core i7. All Core i7 processors feature the aforementioned hyperthreading technology missing from the desktop Core i5. A Core i7 can have anywhere from two cores in a mobile application (i.e. an ultrabook) all the way up to a whopping eight cores in a workstation, though in the real world you’ll mostly see quad-core variants. Not only that, but the Core i7 can support anywhere from two memory sticks all the way up to eight.
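If you want to see where your own machine lands, you can compare the logical processor count the OS reports against the physical core count: when hyperthreading is on, the logical count is double the physical one. A rough sketch (the `/proc/cpuinfo` parsing is Linux-only, and `physical_cores_linux` is just an illustrative helper name, not a standard API):

```python
import os

def logical_cpus():
    """Logical processors the OS sees (includes hyperthreaded siblings)."""
    return os.cpu_count()

def physical_cores_linux():
    """Count unique (package, core) pairs from /proc/cpuinfo.
    Returns None when the info isn't available (non-Linux, some VMs)."""
    try:
        with open("/proc/cpuinfo") as f:
            cores = set()
            phys_id = None
            for line in f:
                if line.startswith("physical id"):
                    phys_id = line.split(":")[1].strip()
                elif line.startswith("core id"):
                    cores.add((phys_id, line.split(":")[1].strip()))
            return len(cores) or None
    except OSError:
        return None

logical, physical = logical_cpus(), physical_cores_linux()
if physical:
    print(f"{logical} logical / {physical} physical cores "
          f"(hyperthreading {'on' if logical > physical else 'off'})")
```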
Nehalem and Westmere
The first generation of Core i5 and i7 processors was known as the Nehalem microarchitecture. As a general overview, it was based on the 45nm process and featured higher clock speeds and improved power efficiency. It does have hyperthreading, but Intel reduced the L2 cache size; to compensate, the on-die L3 cache was enlarged and shared among all cores.
With the Nehalem architecture, you get onboard Intel HD graphics as well as a native memory controller that is capable of supporting two to three memory channels of DDR3 SDRAM or four FB-DIMM2 channels.
As you might’ve noticed, Nehalem doesn’t encompass the Core i3; the Core i3 wasn’t introduced until 2010, alongside the Westmere microarchitecture. Under Westmere, you could get processors with up to 10 cores (the Westmere-EX), with clock speeds reaching up to 4.4GHz in some cases. New instruction sets allowed for up to 3x the encryption and decryption rate of earlier chips. And, of course, you have integrated graphics and better virtualization latency.
Sandy Bridge and Ivy Bridge
Ivy Bridge has some significant improvements over Sandy Bridge, including support for PCI Express 3.0, 16-bit floating-point conversion instructions, multiple 4K video playback, and support for up to three displays. As far as actual numbers go, there’s about a 6% increase in CPU performance compared to Sandy Bridge, but anywhere between a 25% and 68% increase in GPU performance.
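Those 16-bit floating-point conversion instructions move values between single precision and the compact IEEE 754 half-precision format. Python’s `struct` module supports the same half-precision encoding (format code `e`), so the rounding these instructions perform in hardware can be sketched without any special CPU support:

```python
import struct

def f32_to_f16_bits(x: float) -> int:
    """Round a float to IEEE 754 half precision; return the 16 raw bits."""
    return struct.unpack("<H", struct.pack("<e", x))[0]

def f16_bits_to_f32(bits: int) -> float:
    """Expand 16 half-precision bits back to a regular float (lossless)."""
    return struct.unpack("<e", struct.pack("<H", bits))[0]

# 1.5 fits exactly in half precision, so it round-trips unchanged.
assert f16_bits_to_f32(f32_to_f16_bits(1.5)) == 1.5
# 0.1 does not: only 10 mantissa bits survive the narrowing conversion.
print(f16_bits_to_f32(f32_to_f16_bits(0.1)))  # prints 0.0999755859375
```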
Haswell and Broadwell
Now, the successor to Ivy Bridge was Haswell, introduced in 2013. Many of the features in Ivy Bridge carried over to Haswell, but there’s plenty of new material, too. Socket-wise, it came in LGA 1150 and LGA 2011. Graphics support for Direct3D 11.1 and OpenGL 4.3 was brought on, as well as support for Thunderbolt technology. There were also four versions of the integrated GPU — the GT1, GT2, GT3 and GT3e. The GT3 had 40 execution units; in contrast, Ivy Bridge’s had just 16. Haswell also came with a ton of instruction set extensions — AVX, AVX2, BMI1, BMI2, FMA3, and AES-NI. With the Haswell microarchitecture, these instruction sets are available to the Core i3, Core i5 and Core i7 alike. Depending on the processor you bought, clock speeds could reach all the way up to 4GHz at normal operating frequency.
Broadwell, Haswell’s 14nm die shrink, followed, and its new features are primarily video-related. With Broadwell, you get Intel Quick Sync Video, which adds VP8 hardware encoding and decoding, and there’s support for VP9 and HEVC decoding as well. Along with the video-related changes, there’s added support for Direct3D 11.2 and OpenGL 4.4, too. As far as clock speed goes, base mainstream processors start at 3.1GHz and can Turbo Boost to 3.6GHz; performance variants have a base of 3.3GHz and can Turbo Boost to 3.7GHz.
Skylake, Kaby Lake and Cannonlake
Skylake arrived in 2015 on the 14nm process. As far as actual features go, you get support for Thunderbolt 3.0, SATA Express and an upgrade to Iris Pro graphics. Skylake actually retires VGA support and adds capabilities for up to five displays. New instruction set extensions were also added — Intel MPX, Intel SGX and AVX-512. And on the mobile side of things, Skylake CPUs are actually capable of being overclocked.
Kaby Lake is the most recent generation of Intel CPUs, having been announced just a few months ago in August 2016. Built on the same 14nm process, Kaby Lake continues the trend we’ve already been seeing — higher CPU clock speeds and faster clock speed changes. New graphics architecture was also added to improve the performance of 3D graphics and 4K video playback. Beyond that, there weren’t any major changes over Skylake, just a lot of little alterations here and there.
New Mobile Technology (Intel, 2008 – present)
Processors intended for mobile and embedded use are very much needed in our growing mobile-first world. While Intel has met some of that need with variations of Skylake and other processors, the Intel Atom is more of a true mobile processor, as that’s the goal of the Atom — to meet the needs of mobile equipment.
The Intel Atom originally launched in 2008, aimed at providing a solution for netbooks and a variety of embedded applications in different industries, such as health care. It was originally designed on the 45nm process, but in 2012 was brought all the way down to the 22nm process. The first generation of Atom processors was based on the Bonnell microarchitecture.
Like we said, the Atom is used in many different embedded applications across a variety of industries. Compared to the rest of the processors we’ve listed, it’s a pretty unknown processor, but it does power a large amount of health care equipment as well as equipment for other services we use.
At least for those who follow technology blogs, the Intel Atom made more of a name for itself when Intel partnered with Google in 2012 to provide support for Google’s Android mobile operating system on Intel x86 processors. With that, Intel began offering a new system-on-a-chip (SoC) platform with its Atom line of processors. Early on, there were some overheating issues, but Intel eventually worked them out.
Unfortunately, the SoC market is already crowded, with fierce competition from Samsung, Qualcomm, NVIDIA, Texas Instruments and many more. Faced with that, Intel has essentially given up on smartphones and tablets, writing off the billions of dollars the company spent trying to expand into them. Like we said, it’s a market with fierce competition, and Intel didn’t see a place for itself there anymore. The most recent development is that it cancelled two new Atom chips intended for the smartphone market — Sofia and Broxton. We haven’t heard anything since then.
AMD APUs (2011 – present)
AMD launched a new line of processors called the Accelerated Processing Unit (APU). It is, of course, a line of 64-bit processors, but it’s innovative because it’s designed to act as a CPU and GPU on a single chip (so you’d have your regular CPU, but also an on-die GPU). The first generation of APUs, announced in 2011, consisted of Llano and Brazos: the former was designed for high-performance situations, while the latter was geared towards low-power devices. Trinity and Brazos-2 were announced in 2012 — Trinity as the high-performance solution and Brazos-2, again, as the low-power offering. Kaveri was the third-generation core, announced in early 2014 for high performance, and in the summer of 2013, Kabini and Temash were announced for low-power hardware.
The AMD APU started out as just a project — the AMD Fusion project. It began in 2006, when AMD set out to create a system-on-a-chip (SoC) that combined the CPU with an on-die GPU.
There’s a lot of neat technology embedded in it — out-of-order execution, newer SIMD instruction extensions such as AVX in later generations, and availability on both the FM1 and FM2 sockets. It wouldn’t necessarily be surprising if you hadn’t heard of the AMD APU before, but despite that, it’s likely many tech enthusiasts and average gamers use the chip every day: both Sony’s PlayStation 4 and Microsoft’s Xbox One use custom versions of third-generation low-power APUs.
AMD FX (2011 – present)
And then you have the AMD FX microprocessors. They’re most definitely not successors to the AMD APUs, but something sold alongside them that directly competes with Intel’s Sandy Bridge and Ivy Bridge architectures. The AMD FX processors are geared more towards the high-performance market, while the AMD APUs have a wider range of markets (low power and high performance) to cover.
When the line initially launched in 2011, it was built on the Bulldozer microarchitecture; in 2012, the Piledriver architecture succeeded it. Both of these architectures use a modular design that puts two cores on a single module. But another successor is coming in 2017 — the Zen microarchitecture. It will use the 14nm process, feature SMT (AMD’s version of Intel’s hyperthreading) and employ the AM4 socket, which provides support for DDR4 RAM.
You can get a significant amount of performance out of the AMD FX series. All of the cores (or CPUs) in this series are unlocked and overclockable, allowing you to seriously push the clock speed on these processors. For instance, using liquid nitrogen for cooling, the AMD FX-8370 was able to set a world record for clock speed — 8722.78MHz, or a little over 8.7GHz.
Since the FX series are high performance processors, they also have a high TDP — up to a whopping 220W.
Intel offers some serious power with its current line of Core i7 processors, but the AMD FX series takes the cake for the highest-performance chips for consumer PCs. The drawback is that there’s no onboard GPU, but when you’re seeking power like this, you’d probably rather have a dedicated video card anyway. It’ll certainly be interesting to see what 2017 and beyond brings with the competition between AMD’s upcoming Zen microarchitecture and Intel’s Kaby Lake and Cannonlake architectures.
And that wraps up the timeline of the many different processors out there, at least as of this writing. Processor technology is an interesting subject, and if you read about the different CPUs, you’ll notice the trend of them getting smaller, yet more powerful. It’ll no doubt be interesting to see what we have another 10 or 20 years down the road.
Keep in mind that this is a timeline we plan on keeping updated, so as new CPU generations release, be sure to check back here for new information!