A detailed history of the processor

Pentium II (1997)

Intel made some major changes to the processor scene with the release of the Pentium II. With the Pentium MMX and Pentium Pro both established in the market, Intel wanted to bring the best of both into one chip. As a result, the Pentium II is something like the child of a Pentium MMX mother and a Pentium Pro father. But, as in real life, it doesn’t necessarily combine the best of its parents. The Pentium II is optimized for 32-bit applications. It also contains the MMX instruction set, which was almost a standard by this time. The chip uses the dynamic execution technology of the Pentium Pro, allowing the processor to predict coming instructions and accelerate the work flow. It actually analyzes program instructions and re-orders them into the sequence that can be run the quickest. The Pentium II has 32KB of L1 cache (16KB each for data and instructions) and 512KB of L2 cache on the package. The L2 cache runs at half the speed of the processor, not at full speed. Nonetheless, the fact that the L2 cache is not on the motherboard, but on the processor package itself, boosts performance.

One of the most noticeable changes in this processor is the package style. Almost all of the Pentium-class processors use the Socket 7 interface to the motherboard. Pentium Pros use Socket 8. The Pentium II, however, makes use of “Slot 1”. The package type of the P2 is called Single Edge Contact (SEC). The chip and L2 cache actually reside on a card which attaches to the motherboard via a slot, much like an expansion card. The entire P2 package is surrounded by a plastic cartridge. In addition to moving to Slot 1, Intel also patented the new interface, effectively barring competitors from making chips to fit the new Slot 1 motherboards. This move, no doubt, demonstrates why Intel moved away from Socket 7 to begin with: they couldn’t patent it.

The original Pentium II was code-named “Klamath”. It ran on a paltry 66MHz bus and ranged from 233MHz to 300MHz. In 1998, Intel did some slight re-working of the processor and released “Deschutes”. They used a 0.25 micron design process for this one and allowed a 100MHz system bus. The L2 cache was still separate from the actual processor core and still ran at only half speed. Intel would not rectify this issue until the release of the Celeron A and, later, the Pentium III “Coppermine”. Deschutes ran from 333MHz up to 450MHz.

Celeron (1998)

About the time Intel was releasing the improved P2s (Deschutes), they decided to tackle the entry-level market with a stripped-down version of the Pentium II, the Celeron. In order to decrease costs, Intel removed the L2 cache from the Pentium II. They also removed the support for dual processors, an ability the Pentium II had. Additionally, they ditched the plastic cover of the P2, leaving simply the processor on the Slot 1 style card. This, no doubt, reduced the cost of the processor quite a bit, but performance suffered noticeably. Removing the L2 cache from a chip seriously hampers its performance. On top of that, the chip was still limited to the 66MHz system bus. As a result, competitor chips at the same clock speeds could still outperform the Celeron. What was the point?

Intel remedied this mistake with the next edition of the Celeron, the Celeron 300A. The 300A came with 128KB of L2 cache on board. The L2 cache was on-die, meaning it ran at full processor speed, not half speed like the Pentium II’s. This was great for Intel users, because the Celerons with full-speed cache often performed better than Pentium IIs with 512KB of cache running at half speed. Because of this, and because the chip tolerated much higher bus speeds than its rated 66MHz, the 300A became well known in overclocking enthusiast circles. It quickly became known as the cheap chip you could buy and crank up to compete with the more expensive stuff.

The Celeron was available in two formats. The original Celerons used the patented Slot 1 interface, but Intel later switched over to a PPGA (Plastic Pin Grid Array) format, also known as Socket 370. The new interface reduced manufacturing costs. It also made it cheaper to convert Socket 7 board designs to Socket 370: motherboard manufacturers found it easier to swap a Socket 7 socket for a Socket 370 socket, more or less leaving the rest of the board the same, than to change their designs over to a slotted board. Slot 1 Celerons ranged from the original 266MHz up to 433MHz, while Celerons of 300MHz and up were available in Socket 370.

AMD K6-2 & K6-3 (1998)

AMD was a busy little company while Intel was playing around with its Pentium IIs and Celerons. In 1998, AMD released the K6-2. The “2” indicates enhancements made to the proven K6 core, with higher clock speeds and higher bus speeds. They were probably also taking a page out of the Pentium “II” book. The most notable new feature of the K6-2 was the addition of 3DNow! technology. Just as Intel created the MMX instruction set to speed up multimedia applications, AMD created 3DNow! as an additional 21 instructions on top of the MMX instruction set. With software designed to use the 3DNow! instructions, multimedia applications get an even bigger boost. With 3DNow!, a larger L1 cache, and continued Socket 7 compatibility, the K6-2 gained ranks in the market without too much trouble. The K6-2 itself had no on-die L2 cache; that would come with the K6-III, and when such a chip was used in a Socket 7 board with L2 cache on board, the motherboard cache effectively became L3 cache.

The K6-3 processor was basically a K6-2 with 256KB of on-die L2 cache. The chip could compete well with the Pentium II and even early Pentium IIIs. In order to eke out the full potential of the processor core, though, AMD fine-tuned the limits of the processor, leaving the K6-2 and K6-3 a bit picky. The split-voltage requirements were pretty rigid, and as a result AMD maintained a list of “approved” boards that could tolerate such fine control over the voltages. Processor cooling was also an important issue with these chips due to the increased heat. In that regard, they were a bit like the Cyrix 6x86MX processors.

Pentium III (1999)

Intel released the Pentium III “Katmai” processor in February of 1999, running at 450MHz on a 100MHz bus. Katmai introduced the SSE instruction set, basically an extension of MMX that again improved performance in 3D applications designed to use the new ability. Also dubbed MMX2, SSE contained 70 new instructions and could perform four floating-point operations simultaneously. This original Pentium III worked off a slightly improved P6 core, so the chip was well suited to multimedia applications. The chip saw controversy, though, when Intel decided to include an integrated “processor serial number” (PSN) on Katmai. The PSN was designed to be readable over a network, even the internet. The idea, as Intel saw it, was to increase the level of security in online transactions. End users saw it differently: they saw it as an invasion of privacy. After taking a PR hit and getting some pressure from customers, Intel eventually allowed the tag to be turned off in the BIOS. Katmai eventually reached 600MHz, but Intel quickly moved on to Coppermine.

In late 1999, Intel released the Pentium III “Coppermine”. While Katmai had 512KB of L2 cache, Coppermine had half that, at only 256KB. But the cache was located directly on the CPU core rather than on the daughtercard, as in previous Slot 1 processors. This made the smaller cache a non-issue in practice, because performance benefited. Coppermine also moved to a 0.18 micron design and the newer Single Edge Contact Cartridge 2 (SECC2) package. With SECC2, the surrounding cartridge covered only one side of the package, as opposed to previous slotted processors. What’s more, Intel again saw the logic that had taken the Celeron over to Socket 370, so they eventually released versions of Coppermine in socket format as well. Coppermine also supported a 133MHz front side bus. It proved to be a performance chip, and it was and still is used by many PCs. Coppermine eventually passed 1GHz.



  1. Your notes are very clear and excellent. If you can update them to the present day, that would be very good.

    Thank you

  2. Your notes have helped me a lot with something I’ve been looking for over the past week. Your notes are excellent, thanks.

  3. Just to point out that you state the 80186 never made it into a personal computer; however, I owned a 186 system around 1992 that was made by Research Machines.
    Just thought you would like to know.

    • There was also a Tandy / Radio Shack PC that used an 80186. Just one model that didn’t last for more than a year. Their usual black and silver case. I can’t swear that it was 100% compatible with the usual instruction sets that software depended on.

      • Hugh Wyn Griffith says:

        That Tandy 186 was the Tandy 2000, and its graphics were not 100% compatible with Windows, much to the distress of users (I was one when I bought my first “almost-PC” in the UK back in the ’90s). This caused a lot of ill feeling between users and Tandy. The Users Group launched a monthly called “Orphans” and hated Ed Juge (who died recently), the then CEO of Tandy, for not providing any support.

        I was amused a few years ago, when Googling Tandy 2000, to pull up a full-page advert for it from one of the well-known magazines of that time, in which Bill Gates lauded it, saying how much his programmers depended on it for its performance! Might explain some of Windows’ problems if they were using a non-conforming PC!

    • You probably have the one that was made in 1990, then; that was the one that did make it into the personal computer, as is stated later in the article.

  4. The 5×86 was not AMD’s answer to the Pentium; the P5 was. The 5×86 was made to offer a greater performance boost to the millions of 486 PCs out there, as it would work in (almost) any 486 motherboard with a socketed CPU or overdrive socket.

  5. Chris, it doesn’t say it was AMD’s answer. It says it was their “competitive response to Intel’s Pentium-class processor”
    on a 486 motherboard.

    Also not mentioned is why Intel went from a number designation to a name: the number was actually a stock number. As I was told by an Intel rep at a Comdex show (Vegas) in ’94-’95, Intel tried to sue AMD for infringement. Like a fragrance, you can’t copyright the recipe, only the name. They lost on the grounds that you can’t copyright a stock number (80486)! So Intel started using a name designation (Pentium), and AMD did the same.

  6. This is great. I’m supposed to be at work, but I’m reading this instead; I just spent quite a while reading it. It’s very interesting, thank you.

  7. Bill Buchanan says:

    Correction to information provided on the Intel 80186 (1980).
    This processor was used in one desktop system, but the system did not sell well. The company was Tandy and the model was the Tandy 2000. There is a very good page covering the processor at: http://www.old-computers.com/museum/computer.asp?c=1219

  8. Mick Russom says:

    Acorn’s Master 512 PC had a 10MHz 80186 CPU which ran MS-DOS and GEM. I would say this qualifies it as a “PC” with an 80186 running MS-DOS.

  9. Sandy Jelusic says:

    I have at home a desktop PC powered by an 8088 at 3.5MHz with turbo mode, a black-and-yellow monitor, a 20MB disk and only a 5.25” floppy. As for DOS, I think it’s IBM DOS. Not really certain.

  10. Very knowledgeable. Please update with the latest changes.

  11. Really, your services are good; we like them. Please keep it up.

  12. chelle-marie says:

    that is great i loved the little joke:

    “The following chips are considered the dinosaurs of the computer world. PC’s based on these processors are the kind that usually sit around in the garage or warehouse collecting dust. They are not of much use anymore, but us geeks don’t like throwing them out because they still work. You know who you are.”

    sounds just like my tech teacher, because he is always complaining about how things have changed and shows us pictures from back when computers still used tapes and how he used to get paid to change the tapes every two hours for a hospital

  13. Mary Alice Thauvette says:

    This article was posted 23-Mar-01. That was nine years ago. It is time to update the article. Or, at least, change the title of the last section from “1999 – Present” to “1999 – March 2001”.

  14. What is the significance of a number like 8086 in the processor name?

  15. Thanks for the notes.

  16. amandu benard says:

    I love the notes; they are precise and straight to the key aspects needed. Thank you very much.

  17. roger crouch says:

    This article lacks credibility. The first chip of the series was the 8080; then the 8085 was made (the 5 indicating it needed only +5V and ground instead of ±5V and +12V). https://www.quora.com/What-is-the-difference-between-8085-and-8086 So the only true thing that can be said about the 8086 is that it was a 16-bit 8080 processor with improved IC features and a larger instruction set.

    • Mike Spooner says:

      From certain perspectives, the “first chip of the series” was the 4004 (1971), or perhaps the 8008 (1972), the 4040 (1974), the 8080 (1974), or…

      In fact, the 8080 external interface was distinctly different from the 8086’s, in concept and not just width: for example, 8080 pin 21 (DMA acknowledge).

      The 8086 was (almost) source compatible with the 8080 for “regular programs”, i.e. not ones that twiddled ports or relied on specific interrupt/trap behaviour.

      So where do you draw the line? Where does Bob draw it? Where does Fiona draw it? All in different places, I suspect.

      The author obviously chose to draw their line at the 8086, probably because delving back beyond the original IBM PC machines might not be worthwhile given a presumed intended audience…
