Editor’s Note 4/49/08 12:36 PM: OK, I’ll admit it. I got HAD by this article. It was based on an April Fools’ joke and made its way to PCMech well after April Fools. I’ll leave it up here since people have already commented on it. I’m not sure if Nathan (the author) knew it was a farce, but one thing is for sure: I need to pay much better attention when I’m publishing guest posts for PCMech. Sheesh…

–START OF THE ORIGINAL ARTICLE–

It has been announced that DirectX 11 will include a completely new type of graphics rendering called ray-tracing. Wait a minute. It’s not new. In fact, it’s been around since the 1980s. So how come it took so long to reach everyday use? How does it work? What advantages does it have over current-gen graphics? These questions are about to be answered.

Ray-Tracing

Ray-tracing was first introduced around 1980, and is basically defined as tracing the paths of light rays as they interact with objects. This is essentially how light reaches our eyes, so it creates quite a vivid and realistic picture. Unfortunately it wasn’t practical for everyday graphics because it required so much raw computing power. It was used sparingly in the ’90s, mostly for demonstrations, but now, in the 21st century, multi-core technology is finally making ray-tracing practical.
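
To make that a bit more concrete, here is a tiny sketch of the core idea in Python (the camera, the sphere and every number in it are made up purely for illustration): for each pixel you fire a ray from the eye and ask whether it hits anything, answering the visibility question with real geometry instead of rasterisation tricks.

```python
import math

# A minimal sketch of the core ray-tracing step: fire a ray from the eye
# through each pixel and test whether it hits a sphere. The camera, sphere
# and 4x4 "image" are all made up purely for illustration.

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest hit in front of the eye, or None."""
    oc = [o - c for o, c in zip(origin, center)]       # origin relative to sphere
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c                         # quadratic discriminant
    if disc < 0:
        return None                                    # the ray misses entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None                        # ignore hits behind the eye

# Shoot one ray per pixel of a 4x4 image at a sphere of radius 2 sitting at z = -5
for y in range(4):
    row = ""
    for x in range(4):
        direction = ((x - 1.5) / 2.0, (y - 1.5) / 2.0, -1.0)   # simple pinhole camera
        row += "#" if ray_sphere_hit((0, 0, 0), direction, (0, 0, -5), 2.0) else "."
    print(row)   # prints a crude silhouette of the sphere
```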

So what happened? Well, the movie industry took advantage of it right off the bat. Many special effects were ray-traced to give a more realistic look. The movie Beowulf was entirely ray-traced. It wasn’t perfect, but it was damn close, and a heck of a lot better than what people have now. To give you an example of how much power ray-tracing takes, though: someone posted a YouTube video of a convertible being ray-traced in real time, and it takes the combined effort of THREE PS3 consoles. You can check it out here, it’s pretty cool. Remember, each PS3’s Cell chip has eight processing cores (six of them active), so we are looking at nearly 20 cores for one non-moving object.

Hmm. This is starting to explain some things, like why Nvidia did not support DirectX 10.1 on its 9-series cards and added nothing new to them beyond a smaller chip. They realized that the old way of doing graphics is dying. What’s the point of DirectX 10.1 anyway? Rasterisation, the technique Nvidia and ATI both use, has reached its peak. They have both perfected the art of essentially faking graphics. Now it’s time for the real stuff. It’s an open field, and apparently Intel is planning on joining the competition. They have recently been experimenting with combining a processor with the graphics card, with successful results. This could spell bad news for both ATI and Nvidia, but knowing the way Intel prices things, I’m sure there will still be close competition.

An interesting thing about ray-tracing is that it scales remarkably well. With rasterisation, you notice less and less with each improvement; the new 8-core Skulltrail beast from Intel, for example, hardly earns gamers more than a few extra FPS. With ray-tracing, however, every ray is an independent calculation, so an 8-core machine can run close to eight times faster than a single core. So what will this do? Well, there will probably be a new multi-core processor every couple of months, possibly reaching over 100 cores before 2010. If each one has ray-tracing graphics built in, you can see the benefit of that over buying a separate graphics card.
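
Here is a rough sketch of why that scaling works, again in Python and with entirely made-up numbers: every row of the image is independent work, so a pool of worker processes can simply split the rows between however many cores you have. The per-pixel arithmetic below is just a stand-in for real intersection and shading code, so treat the timings as an illustration of the principle rather than a benchmark.

```python
import math
import os
import time
from multiprocessing import Pool

WIDTH, HEIGHT = 320, 240   # arbitrary image size for the demo
SAMPLES = 64               # made-up "work per pixel" standing in for shading

def trace_row(y):
    """Stand-in for tracing one row of pixels; rows share no state."""
    total = 0.0
    for x in range(WIDTH):
        for s in range(SAMPLES):
            total += math.sin(x * 0.001 + y + s) ** 2
    return total

def render(workers):
    """Render all rows with the given number of worker processes."""
    start = time.time()
    with Pool(workers) as pool:
        pool.map(trace_row, range(HEIGHT))
    return time.time() - start

if __name__ == "__main__":
    # Because the rows are independent, wall-clock time should drop roughly
    # in proportion to the number of workers, up to the machine's core count.
    for workers in (1, 2, os.cpu_count() or 4):
        print(f"{workers:2d} worker(s): {render(workers):.2f} s")
```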

Benefits of Ray Tracing

[Image: ray-traced vs. rasterized comparison]

By now you probably want to see what ray-tracing can do compared to rasterisation. Take a look at the image on the right. As you can plainly see, the ray-traced image has more realistic reflections and shadows. Nvidia has worked their butts off on their 3D shader processors, but they could never get anything close to this. It’s very encouraging to see the difference, but remember we are a ways off from rendering objects of that quality interactively on our computers. DirectX 11 is only going to support a few limited features, so that the transition to ray-tracing is gradual rather than all at once. I won’t be surprised if Ray-Tracing Processing Units (RPUs) are implemented on Nvidia’s 10-series cards. At first maybe only characters are ray-traced. Then, as new hardware is introduced, textures and objects within a certain draw distance are ray-traced, until eventually everything as far as the eye can see is ray-traced and rasterisation becomes a thing of the past.
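
Those reflections and shadows fall out naturally from how a ray tracer works, and a short sketch may help show why: at every point a primary ray hits, you simply fire more rays, one toward the light to test for shadow and one in the mirror direction to pick up the reflection. The two-sphere scene, the colours and the reflectivity values below are invented for illustration; they are not from the article or any real engine.

```python
import math

# Sketch of secondary rays: a shadow ray toward the light and a mirror
# (reflection) ray, both fired from the point a primary ray hits.
# The scene below (two spheres, one light) is entirely made up.

SPHERES = [
    # (center, radius, base colour, reflectivity)
    ((0.0, -0.6, -4.0), 1.0, (0.9, 0.2, 0.2), 0.5),
    ((1.2,  0.4, -3.0), 0.6, (0.2, 0.2, 0.9), 0.0),
]
LIGHT = (5.0, 5.0, 0.0)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def hit_sphere(origin, direction, center, radius):
    """Nearest positive hit distance along the ray, or None."""
    oc = sub(origin, center)
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 1e-4 else None          # small epsilon avoids self-hits

def nearest_hit(origin, direction):
    best = None
    for center, radius, colour, refl in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, colour, refl)
    return best

def trace(origin, direction, depth=0):
    hit = nearest_hit(origin, direction)
    if hit is None:
        return (0.1, 0.1, 0.1)              # background colour
    t, center, colour, refl = hit
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = norm(sub(point, center))
    # Shadow ray: the point is lit only if nothing blocks the path to the light.
    to_light = norm(sub(LIGHT, point))
    lit = 0.0 if nearest_hit(point, to_light) else max(dot(normal, to_light), 0.0)
    shaded = tuple(c * lit for c in colour)
    # Reflection ray: recurse in the mirror direction, something a
    # rasteriser has to fake with tricks like environment maps.
    if refl > 0.0 and depth < 3:
        mirror = tuple(d - 2.0 * dot(direction, normal) * n
                       for d, n in zip(direction, normal))
        bounce = trace(point, mirror, depth + 1)
        shaded = tuple(s + refl * b for s, b in zip(shaded, bounce))
    return shaded

# One primary ray aimed at the reflective red sphere.
print(trace((0.0, 0.0, 0.0), norm((0.0, -0.15, -1.0))))
```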

And This Matters Because…

Is this a good thing? Maybe. Everything would be a lot more predictable, and you would be able to confidently tell which brand of graphics card is better just by looking at the spec sheet, unlike today, where the only real way of telling which of two cards is better is to rigorously test them in 3D programs, measure their temperatures, calculate wattage, and so on. So there are two possible outcomes. Either we finally end the numbers game by being able to tell what is what without any background info, or, more likely, the industry simply enters the next stage of confusing the general public in return for profit.