Nothing like paying off the guys that copied your processor to begin with.
Intel's market dominance has primarily been due to its market position and total platform support. The last GOOD chipset AMD made in-house was the 768.
When they discontinued its production, they took themselves out of the running as a serious platform competitor.
Options from VIA, SiS and ALi were JOKES compared to the solid solutions offered by Intel, and they made AMD the choice for those who couldn't afford the better Intel solution. This is how AMD earned the reputation of being the "cheap" CPU choice. Notice I didn't say value.
In 1998, Intel came out with a functional demonstration of Merced at COMDEX. It was a 64-bit server and workstation CPU. This was a completely different architecture, IA64, and was not backwards compatible with x86. It was later renamed to Itanium, and has NEVER done well in terms of high-end server sales. Its x86 incompatibility pretty much sealed its fate there.
Shortly thereafter, Intel's next foray into oddity was the adoption of the NetBurst architecture. At the time, the move made sense: die-process scaling had somewhat plateaued, and the fear that CPU operating frequencies would be capped as a result led Intel to lengthen the processing pipeline. Traditional x86 processors, such as Intel's Pentium 3, used a 10-stage pipe. Intel extended the pipe to 20 stages with Willamette (and later to 31 with Prescott). With less work done in each stage, every stage could complete faster, which allowed the clock to scale much more readily than on its 8-stage or 10-stage counterparts. The caveat, of course, was that the added stages meant the CPU performed less work per cycle. This is why a processor like the AMD Athlon was never directly frequency-comparable to a P4; they operate differently.

This led to the "processor naming" fiasco that both companies participated in. Since Intel was able to offer CPUs operating at a much higher frequency, AMD looked bad by comparison. Even though lower-frequency AMD CPUs, which were VERY similar to Intel's P3 designs, were able to equal, and often surpass, the P4 in performance, the numbers looked bad on paper, and uneducated consumers simply assumed the processor with the higher number beside it was faster.
This led to AMD adopting a processor speed-rating scheme formerly used by past competitor Cyrix (whose CPU business VIA later absorbed). It was a "Performance Rating", or PR rating, for the CPU, meant to give you an indication of performance relative to Intel's P4 CPUs. This is where the "2200+" naming came from.
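The logic behind a PR rating is simple arithmetic: effective performance is roughly clock speed times work-per-cycle (IPC). The sketch below illustrates it; the 1.8 GHz clock of the Athlon XP 2200+ is real, but the relative-IPC figures are illustrative assumptions, not measurements.

```python
# Rough model: performance ~ clock (GHz) * relative IPC.
# relative_ipc values here are made-up illustrations, not benchmark data.
athlon_xp_2200 = {"clock_ghz": 1.8, "relative_ipc": 1.25}  # short pipe, more work/cycle
pentium4_2200 = {"clock_ghz": 2.2, "relative_ipc": 1.0}    # long NetBurst pipe, baseline

def perf(cpu):
    """P4-equivalent gigahertz under this toy model."""
    return cpu["clock_ghz"] * cpu["relative_ipc"]

# 1.8 * 1.25 = 2.25 "P4-equivalent GHz" -> marketed as "2200+"
print(round(perf(athlon_xp_2200), 2), "vs", round(perf(pentium4_2200), 2))
```

Under these assumed numbers, the 1.8 GHz part models out slightly ahead of a 2.2 GHz P4, which is exactly the comparison the "2200+" badge was meant to communicate.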
Intel of course figured they could play the game as well, and so they adopted an even MORE bizarre naming scheme whose numbers had zero basis in ANYTHING performance related.
Behind the scenes, of course, Intel had continued to develop the P3 design. They had adapted it for laptops and worked extensively on power management. Die-process advancements arrived much faster than Intel had initially expected, which allowed this architecture, which began its mobile life as the "Pentium M", not only to grow but to prosper in the mobile arena. It also led to a complete platform, known as Centrino. Here AMD was not even remotely competitive with the "select" desktop CPUs they fitted to laptops. Compounded by the complete absence of a platform solution, AMD's notebook offerings were a joke for anybody serious about mobility.
During this time, AMD came up with a 64-bit extension to the x86 architecture, AMD64. This removed the existing memory cap and allowed the execution of 64-bit code on x86 while remaining completely backwards compatible with traditional 32-bit x86 software. Since the CPUs did not require special software and the performance was excellent, the Athlon 64 took off, even managing to garner a small corner of the value server market.
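The "memory cap" being removed is just pointer-width arithmetic. A quick sketch; the 48-bit figure is the virtual-address width the first AMD64 implementations actually wired up, while the architecture itself defines a full 64-bit model.

```python
# 32-bit x86: a 32-bit pointer can address at most 2**32 bytes.
cap_32bit = 2 ** 32            # 4 GiB of virtual address space, total

# AMD64: early implementations used 48-bit virtual addresses,
# lifting the practical ceiling by several orders of magnitude.
cap_48bit = 2 ** 48            # 256 TiB

print(cap_32bit // 2 ** 30, "GiB ->", cap_48bit // 2 ** 40, "TiB")
```

That 4 GiB ceiling (shared between the OS and applications on 32-bit systems) is what made the extension so attractive for servers even before most desktop software went 64-bit.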
Intel's failure with the widespread adoption of Itanium, and subsequent lack of 64-bit support in the desktop market led to the next logical decision: License the address-extension technology from AMD. And they did.
Shortly thereafter, Intel migrated their mobile solution, now branded "Core" and having gone through various incarnations at this point (including Dothan and Yonah), to the desktop arena. It arrived multi-core and significantly faster than anything AMD had to offer. It also scaled VERY well, thanks to its mobile roots.
Intel's slowest quad-core offering, the Q6600, was faster than AMD's FASTEST quad-core offering. And this is where the trouble began. AMD had been milking the (very successful) Athlon architecture for a very long time, and Core took them by surprise. Intel's resources allowed them to keep the train rolling; the i7 followed shortly thereafter, widening the gap. And this is where we sit today.
Thankfully for AMD, purchasing ATI, the Canadian graphics manufacturer, put them in a position to start creating chipsets again. This will hopefully let them be platform-competitive with Intel from a quality perspective, as their lack of solid chipsets has always been their biggest detriment.
*********************
On another note, this influx of cash is going to widen the already large gap between ATI and NVidia. The latter is in a bad way right now: they are not licensed to manufacture chipsets for Intel's CPUs, and their former partner, AMD, is no longer interested in its past mistress since the acquisition of ATI. Between low die yields and engineering FUBARs, NVidia's future is not looking rosy.