The 2008 PC Builder's Bible
Find the best parts. Learn to build a rig from scratch and overclock it
to kingdom come. PC Gamer shows you how
Getting your hands dirty and building your own system is what separates PC gamers
from their console brethren, and it just so happens to be one of the most exciting parts
of our hobby. Unfortunately, it’s also a pretty daunting process for anyone who hasn’t
assembled his own rig from scratch. Thankfully, this guide not only gives you all you
need to know about every component that goes in your gaming PC, but also thoroughly
walks you through the entire building process with detailed instructions and helpful photos.
We’ve always written the PC Builder’s Bible not only as a guide for new system builders
who want more versatility from their computer, but also for hardcore enthusiasts who
have to be on the cutting edge of technological innovation. And guess what, that
includes us as well. Every piece of hardware you’ll find recommended in this book is
something we would buy for ourselves. The specs of our custom rigs are actually the
same as the machines we’ve built for ourselves at home. That’s because we’re just like
you; we want the most bang for our proverbial buck. And with more money saved from
building a lean super-rig, you’ll have more money to spend on the awesome new games
to play on it!
MOTHERBOARDS
Wrap your head around the various motherboard chipsets that create
the backbone of your gaming PC.
- Meet the latest motherboard technologies
- Buy the right motherboard in six easy steps
- Motherboards for Intel CPUs – LGA 775
- Motherboards for AMD CPUs – Socket AM2
CPUS
Dual core or quad core? Intel’s Penryn or AMD’s Phenom? We give you the lowdown on both.
- Intel’s Penryn and three recommended Intel CPUs
- AMD’s Phenom and three recommended AMD CPUs
RAM
DDR2 or DDR3? Find out how your random access memory works with
answers to frequently asked questions.
- Choose the RAM that’s right for you
VIDEOCARDS
Behind every great gaming PC is a great video card. Follow our guide
when deciding your next GPU purchase.
- Videocard features to look for
- SLI and Crossfire analyzed
- Is DirectX 10 worth it?
- The best mid-range DirectX 10 cards
HARD DRIVES
The hard disk is a paradox – it’s both tiny and enormous at the same
time. We’ll help you wrap your head around the terabytes of data.
- Easy answers to common questions about your hard drive
- PC Gamer’s hard drive picks
OPTICAL DRIVES
CDs, DVDs, dual layer, Blu-ray… the optical drive scene is evolving fast.
- Optical drives in a nutshell
- PC Gamer’s optical drive picks
SOUNDCARDS AND SPEAKERS
Nobody enjoys the sound of silence. Learn all there is to hear about the
latest audio technologies, and you’ll soon be basking in true surround sound.
- Meet the latest technology for audiophiles
- Your speakers are sick. PC Gamer has the cure
CASES
We guide you through the ins and outs of a PC’s metal frame and
review two excellent high-class enclosures.
- Give your components a happy home
- PC Gamer’s recommended cases
DISPLAYS
In the case of PC gaming displays, screen size and display resolution
matter. From brilliant 24-inchers to dominating 30-inch high-definition monitors, you’ll
never see games the same way again. We tell you what to consider.
- Pick the perfect display
- Three recommended monitors
PERIPHERALS
If you’re using a standard mouse and keyboard, you’ll never really have
a chance in the gaming world. If you’re serious about gaming, get some serious gear.
- The best gaming keyboards
- The best gaming mice
- The best gaming accessories
Everything you need to know to build a smoking-fast, no-compromises gaming PC
- Prices, parts, and lots of pictures to show you how it’s done
Learn how to wring every last drop of performance from your new rig
The importance of a good motherboard can’t be overemphasized. Every byte of data
your computer processes must pass between several components before it reaches
you, and the motherboard is the highway. The last thing you want is a metaphorical
traffic jam between CPU, RAM, and videocard when you’re trying to frag.
Should your next motherboard be BTX and support both SLI and DDR2? Don’t be
embarrassed if you don’t know the answer—our CliffsNotes primer on top-end mobo
technology will have you spouting geek-speak in less time than it takes to burn a DVD.
PCI Express has become a de facto motherboard standard seemingly overnight,
despite the fact it hasn’t demonstrated much of a performance boost over the older AGP
standard (at least not in single-card configurations). PCI-E joins the trend of moving
away from wide, slow interfaces with lots of pins to narrow, high-speed interfaces. It
increases the available bandwidth for graphics from AGP’s 2GB/s to a whopping 8GB/s.
But PCI-E’s real graphics promise lies in its upstream bandwidth throughput: 4GB/s
compared with AGP’s 133MB/s.
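Those bandwidth figures fall out of simple lane arithmetic. A quick sketch (our own math, assuming first-generation PCI-E signaling at 2.5GT/s per lane with 8b/10b encoding):

```python
# PCI Express 1.x link math: 2.5 gigatransfers/sec per lane, with 8b/10b
# encoding (10 bits on the wire per usable data byte), yields 250MB/s per
# lane in each direction.
def pcie1_bandwidth_mb(lanes):
    gt_per_sec = 2.5e9               # raw signaling rate per lane
    bytes_per_sec = gt_per_sec / 10  # 8b/10b: 10 wire bits per data byte
    return lanes * bytes_per_sec / 1e6

# A x16 graphics slot moves 4,000MB/s each way -- 8GB/s aggregate, the
# figure quoted above. A x1 slot moves 250MB/s per direction.
print(pcie1_bandwidth_mb(16))  # -> 4000.0
print(pcie1_bandwidth_mb(1))   # -> 250.0
```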
For add-in cards, the standard x1 PCI-E connectors offer about 300MB/s of
throughput—just about double that of a standard PCI slot. Considering the amount of
integration on today’s motherboards, however, few components really need to be
added. For this reason, we’ve not yet seen any real application for x1 cards; but that’s
likely to change as soon as software developers create applications that take advantage
of the interface.
The BTX motherboard formfactor moves the processor to the front of the case,
relocates the chipset to deliver higher I/O speed, and provides better component
cooling. Despite these advantages, BTX has been greeted with about as much
enthusiasm as turd casserole at a pot-luck. Much of the resistance springs from chassis
manufacturers, who are reluctant to spend $50K to retool their assembly lines. AMD,
meanwhile, has publicly stated it won’t embrace the standard unless customers demand
it. While we think BTX is a smart design improvement, it’s pretty much dead in the
water. You can safely stick with the tried and true ATX formfactor until the next
challenger comes along.
ATX 12V 2.0
PCI Express graphics cards can suck up to 75 watts of power, compared with AGP's
50-watt maximum. ATX 12V 2.01-compliant power supplies feature a 24-pin connector
that jacks into new PCI Express-capable motherboards. The good news is that you don't
necessarily have to buy a new PSU to run your new 24-pin mobo. Many motherboards
with a 24-pin connector are keyed to accept an older 20-pin PSU; the extra four pins are
simply left vacant. To make up for the lack of power, some new motherboards allow you
to supplement the mobo's main power by plugging in a second, four-pin connector.
NCQ and SATA 3Gb
SATA 3Gb is a pretty simple concept: Take SATA’s maximum transfer rate of 150MB/s,
double it to 300MB/s, and you get SATA 3Gb. Today’s hard drives don’t need the
throughput, but there’s no reason not to have it on a new motherboard. Native
command queuing is probably more important. NCQ enables a hard drive and its
controller to intelligently reorder data requests, so the combo can scoop up and write
data faster. Although we’ve seen only small performance boosts from NCQ so far, it’s a
good idea to have it on whatever motherboard you choose.
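The jump from 150MB/s to 300MB/s mirrors the interfaces’ raw line rates once the 8b/10b encoding overhead is stripped (2 of every 10 bits on the wire are signaling, not data). A quick back-of-the-envelope check, our own arithmetic:

```python
# SATA's marketing name counts raw line rate; 8b/10b encoding means only
# 8 of every 10 wire bits carry payload data.
def sata_payload_mb(line_rate_gbps):
    data_bits = line_rate_gbps * 1e9 * 8 / 10  # strip encoding overhead
    return data_bits / 8 / 1e6                 # bits -> bytes -> MB/s

print(sata_payload_mb(1.5))  # -> 150.0  (original SATA)
print(sata_payload_mb(3.0))  # -> 300.0  (SATA 3Gb)
```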
High-Definition Audio
High-Definition Audio bumps maximum audio resolution from AC-97’s 20 bits up to 32
bits, while sampling rates are boosted from AC-97’s 48kHz max up to 192kHz. HD
Audio supports up to eight analog channels, where AC-97 supported only six. PCs
outfitted with HD Audio will also support a host of Dolby technologies, including Dolby
Headphone, Dolby Virtual Speaker, Dolby Digital Live, and Dolby Pro Logic IIx. Dolby
Pro Logic IIx might be the most interesting. This technology can encode a stereo or 5.1-
channel audio stream—including game audio—into 6.1 or even 7.1 channels in real time.
So what’s the catch? Most audio experts we’ve talked to contend that it will be all but
impossible for HD Audio to match the fidelity of even a three-year-old PCI soundcard
because of all the electrical noise motherboards generate.
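To put those headline numbers in perspective, peak uncompressed PCM bandwidth is just channels times bit depth times sample rate. A quick calculation (our own arithmetic, not from the spec sheets):

```python
# Peak uncompressed PCM bit rate = channels x bits per sample x sample rate.
def pcm_mbps(channels, bits, rate_hz):
    return channels * bits * rate_hz / 1e6  # megabits per second

print(pcm_mbps(6, 20, 48000))   # AC-97 ceiling:    5.76 Mb/s
print(pcm_mbps(8, 32, 192000))  # HD Audio ceiling: 49.152 Mb/s
```

HD Audio’s maximum stream is more than eight times fatter than AC-97’s, which is why the new codecs hang off faster internal links.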
Before you buy a motherboard, you must first decide if you’re going to recycle your old
CPU or upgrade to something new. If you’re keeping your old proc, make sure it will
work with your new mobo. If you’re going new, will it be AMD’s Quad FX uber chips,
Intel’s blazing Core 2 Extreme QX6800 with a whopping 8MB of onboard cache, or
something in between?
Choosing a core-logic chipset is as important as your CPU choice. Intel, NVIDIA, and
VIA all make excellent chipsets for mid- and high-end processors.
Now it’s time to decide which features you want on your mobo. In the old days (well, if
you consider 1999 the old days), motherboards were about as stripped as a Chevy
Impala left parked on a Bronx side street. These days, motherboards come with
everything you need, save a videocard, CPU, and RAM. What are you looking for? Dual
Gigabit Ethernet? HD Audio? Enough SATA ports to feed a rack of hard drives? Make a list and check it twice.
Once you find a motherboard that tickles your fancy, read the owner’s
manual before you plop down your dough. Most motherboard vendors offer their
manuals as free downloadable PDFs on their websites. The manual will reveal any of
the board’s limitations (such as the types of memory and CPUs it supports), and it will
let you know if a PSU upgrade is necessary.
If the motherboard has
been out for a few months, visit the forums on the manufacturer’s website and see what
buyers are saying. But remember to keep everything in perspective: People don’t go to
the forums to wax poetic about their AM2 board, they go there to bitch. It’s all but
impossible to determine if the person complaining is a fried customer or one of the
manufacturer’s competitors looking to sow fear, uncertainty, and doubt. Always take
forum comments with a grain of salt, but if you see a pattern emerging, it could be a red flag.
It’s not at all uncommon for motherboard manufacturers to revise their designs without
going so far as to introduce an entirely new model. Newer revisions are almost always
better than older boards, so try to purchase the latest version of the motherboard that’s
available. You’ll find the rev numbers silk-screened on the board.
nForce 780i SLI 775 A1
Chipset: NVIDIA nForce 780i
CPU Support: Intel Pentium, Pentium EE, Core 2 Duo, Core 2 Quad, Core 2 Extreme
Memory Support: DDR2 533/800/1066/1333 MHz
PCI Slots: 3 PCIe x16, 1 PCIe x1, 2 PCI
Notable features: Triple SLI support, 6 SATA ports with support for RAID 0, RAID 0+1,
RAID 5; integrated 7.1 channel audio, 10 USB 2.0 ports (6 external, 4 internal), 2
Firewire ports (1 external, 1 internal)
Chipset: Intel X48
CPU Support: Intel Pentium, Pentium EE, Core 2 Duo, Core 2 Quad, Core 2 Extreme
Memory Support: DDR2 667/800/1066/1200 MHz
PCI Slots: 2 PCIe x16, 3 PCIe x1, 2 PCI
Notable features: Crossfire support, 6 SATA 3Gb/s ports, Dual Gigabit LAN
controllers, SupremeFX II Audio Card, 12 USB ports (6 external, 6 internal), 2 Firewire
ports (1 external, 1 internal), External LCD post device
CPUs are tricky beasts. There was a time when simply looking at the number of
megahertz on a chip was a surefire indication of how well it would perform, but sadly
that just isn’t the case any longer. With Intel and AMD at each other’s throats for the
biggest piece of the market, their approaches to the technology have taken different
paths. Considering the new 64-bit and quad-core chips available now, does clock speed
even mean anything?
The question of whether or not to upgrade to a multi-core processor has long been
settled—you definitely want the advantages a multi-core proc gives you. You get a
payoff right away with extra breathing room for background applications to run without
dragging your gaming performance down to a crawl, and then you’ll get another payoff
in the future, as more and more games optimized for multi-core processors (like
Company of Heroes: Opposing Fronts and Crysis) hit the shelves. And with quad-core
procs from Intel hovering at around the same price as slightly higher-clocked dual-core
chips, going quad core is a little bit of future-proofing that you can’t afford to miss.
Of course, the Intel-versus-AMD debate rages on. AMD has rolled out its “Phenom”
series of multi-core processors, along with an announcement that it’s bringing “true
quad-core” to the desktop (with the intention of scaring the crap out of potential Intel
shoppers). What that refers to is that quad-core Phenom processors have four
individual cores on a single piece of silicon, whereas Intel’s quad-core procs are really
two dual-core procs stuck together. Does this fundamental design difference matter?
We’ll give you the answers.
Q: What exactly is a Penryn?
A: Penryn is the “family” name for Intel’s follow-up to its 65nm Core 2-lineage CPUs.
For consumers, Wolfdale will be the dual-core Penryn, Yorkfield will be the quad-core
version, and Harpertown will be the quad-core Xeon workstation CPU.
The big enhancement is the process shrink from 65nm to 45nm. Intel calls its move to a
45nm process the “biggest change to computer chips in 40 years.” Intel’s tendency
toward self-aggrandizement aside, the 45nm process is a significant jump forward,
allowing twice as many transistors to fit in the space of a 65nm chip. The 45nm process
also uses high-k gate dielectrics. Not to be confused with L. Ron Hubbard’s Dianetics,
the high-k gate using hafnium oxide replaces the silicon dioxide gate that’s been in use
since the 1960s. The new transistor leaks less energy, produces less heat, and
switches 20 percent faster than a silicon dioxide transistor. This boils down to
smaller, faster, more power-efficient CPU cores. How much smaller? The previous Core
2 Extreme quad cores packed 582 million transistors within a space of 286mm2. The
Yorkfield quad core packs 820 million transistors into 214mm2.
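Working from those die figures, the density gain is easy to check (our own arithmetic):

```python
# Transistor density from the figures above: the 65nm Core 2 Extreme quad
# vs. the 45nm Yorkfield quad.
old = 582e6 / 286  # ~2.0 million transistors per mm^2 at 65nm
new = 820e6 / 214  # ~3.8 million transistors per mm^2 at 45nm

print(round(old))           # -> 2034965
print(round(new / old, 2))  # -> 1.88, i.e. nearly double the density
```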
Q: So what else is new under the hood?
A: Penryn is more than a simple die shrink. The new CPUs are based on the Core 2
microarchitecture with a few tweaks that Intel hopes will keep it ahead of AMD. The
headliner of these tweaks is the new SSE4 instruction set designed for media encoding
and high-performance computing. Also new is a Super Shuffle Engine, which increases
the speed of many SSE media-encoding instructions by doubling the processing units
from 64-bit to 128-bit.
Penryn also includes a new Fast Radix-16 Divider that pretty much doubles the division
math speed. Intel also reportedly boosted virtual machine performance by as much as
25 to 75 percent. And Intel added a new feature called Dynamic Acceleration
Technology that essentially overclocks one of the cores when the others are sleeping.
The new chip also makes use of all the physical space freed up by the die shrink.
(Imagine if all the stuff in your garage shrunk by 50 percent!) That’s what accounts for
the beefed-up L2 cache, which at 6MB per dual-core die is a 50 percent increase over the L2 in
65nm quad cores. The larger L2 cache helps in numerous ways, but its biggest
contribution is in ameliorating the potential performance hit caused by the ancient
shared front-side bus architecture Intel uses for communication between cores. To keep
the front-side bus from bogging down, the large and very efficient L2 cache ensures that
the CPU has ample data close at hand so it won’t be data starved. While Intel has
certainly proved that the FSB strategy is still workable, the company has stated it plans
to adopt an on-die memory controller in its next CPU.
Q: How significant is the new SSE4 instruction set?
A: Instruction sets in CPUs always garner the most attention but, sadly, are usually the
last feature to actually add performance benefits. While the Fast Radix-16 Divider and
the Super Shuffle Engine in Penryn will increase the performance on many existing
applications, the 47 new instructions in SSE4 will not give you any performance boost
until applications directly
support them. SSE4’s main claim to fame will be in media encoding and high
performance computing (i.e., supercomputers). In fact, Intel’s demonstrations of SSE4-
enabled encoders showed incredible performance boosts.
However, those demonstrations have been called into question, with skeptics
suggesting that while the alpha build of DivX used for the proof-of-concept benchmarks
is faster with SSE4, it’s not a realistic scenario. One developer we spoke with told us:
“The applicability of SSE4 for our codecs seems rather limited and the expected gain
seems rather small (I expect no more than a 1- to 2-percent speed gain with SSE4)
compared to the speed increment we got from SSE on pre-Core 2 Duo and SSE2 on
Core 2 Duo. The SSE4-instructions that are often advertised as being especially
targeted for video encoding are useless for us, since those instructions are only
applicable for the exhaustive search algorithm (ESA), which we don’t use because of its…”
Q: Is Penryn faster than the current Core 2 quad cores?
A: We don’t want to give away the punch line but, generally, an equivalent Penryn runs
up to 14 percent faster when compared clock-for-clock with the current Core 2 quads.
The exact speed increase depends on the benchmark. In some, you’ll see no change in
performance; in others, a healthy increase is possible. But remember, Penryn isn’t the
big leap forward. Intel’s CPU schedule dictates a little jump one year and then a big
jump the next year. This is the little jump. Intel hopes to make a big jump when it
introduces its Nehalem CPU in late 2008.
Q: Will Penryn work in my motherboard?
A: Long-time Intel lovers have been vexed by this for years, as the company’s been in
the habit of invalidating perfectly good motherboards by requiring new or updated
chipsets to run its latest CPUs. Want a 1,066MHz P4 on a 925X mobo? Sorry, you need
a 925XE. Pentium D on a 925XE? Nope, you need a 955X chipset. Pentium 955 EE on
a 955X? Guess again: 975X.
Fortunately, Intel has gotten a little better in this area, and there is a very good chance
that a QX9650 will work in many existing motherboards. Certainly motherboards that
use Intel’s P35 and X38 chipsets will support the new CPU (although a BIOS update
might be required). Some Intel 965 and 975X boards might also work with the new CPU
and we understand that the majority of 680i boards will be compatible. To be safe,
however, before you buy any board/CPU
combination, check the manufacturer’s website to see what processors it has validated
with the design. Just because the Yorkfield and Wolfdale are LGA775 doesn’t mean
they’ll work in the board of your fancy.
Above: Intel’s 45nm die shrink allows engineers to pack nearly twice the number
of transistors into the same space as a 65nm CPU
Three recommended Intel CPUs
QX9650 Core 2 Extreme
Yorkfield 3.0GHz 12MB L2 Cache LGA 775
E8400 Core 2 Duo
Wolfdale 3.0GHz 6MB L2 Cache LGA 775
Q: How do you pronounce Phenom?
A: It’s fee-nom, not fuh-nom.
Q: What advances does Phenom offer?
A: Phenom is AMD’s first quad-core processor and is touted as a “true quad core.”
Based on a 65nm process, Phenom uses an enhanced version of the stellar K8 Athlon
64 core, which features many of the same “wider and faster” techniques as Intel’s Core
2 Duo. Improvements over the Athlon 64 include the ability to execute SSE instructions
in 128-bit chunks versus 64-bit. Cache speed gets a bump, as well, with L1 going from
16 bytes per cycle to 32 bytes per cycle, and L2 going from 64 bits per cycle to 128 bits.
AMD also spends silicon on increased floating-point performance; a few new
instructions; HyperTransport 3, which nearly quadruples the bandwidth over previous
implementations; and more L3 cache.
Q: What’s meant by “true quad core”?
A: Each Phenom features four execution cores on one single, contiguous die.
Architecturally, it’s far more elegant than Intel’s quad core, which fuses two dual-core
chips in a CPU and forces the dual-core islands to talk to each other over the front-side
bus. Phenom was designed from the get-go as a quad chip, and each core
communicates at HyperTransport 3 speeds—far faster than
Intel’s front-side bus. All the cores can also share data stored in the L3 cache, so a core
would have to reach out only to the L3 instead of the much slower system RAM in
certain applications. This adds up to a chip that, on paper, seems to at least equal—if
not exceed—Intel’s Core microarchitecture.
Q: Will Phenom work in my existing motherboard?
A: Phenom is designed as a Socket AM2/Socket AM2+ chip and should, therefore, drop
right into the majority of existing motherboards, provided the motherboard maker
updates the BIOS—and didn’t screw up on the board design.
Q: Does Phenom have the same RAM issues that DDR2 Athlon 64s did?
A: No. AMD corrected the issue that limited the DDR2 Athlon 64s to whole number
RAM divisors. This, in essence, would force DDR2/800 RAM to run at DDR2/766.
Phenom CPUs use a separate clock for the memory controller, so memory will run at its
intended speed. Consequently, however, the memory controller no longer runs at the
core’s speed. The memory controller on the 2.6GHz Athlon 64 FX-60 runs at 2.6GHz.
On the 2.6GHz Phenom 9900, the memory controller runs at 2GHz and notches down
to 1.8GHz for the 2.3GHz Phenom 9600. It’s not clear if or how this impacts memory
performance; it’s still a good clip faster than what the memory controller runs at in
competing Intel machines, where that part is located in the north bridge.
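The old whole-number-divisor quirk mentioned above is easy to model. A sketch of the rule as we understand it (the function name and rounding details are our own illustration, not AMD documentation):

```python
import math

def a64_effective_ddr2_speed(cpu_mhz, target_ddr2):
    """The DDR2 Athlon 64's memory controller derived its memory clock by
    dividing the CPU clock by a whole number, rounding the divisor up.
    target_ddr2 is the module rating (e.g. 800), twice the memory clock."""
    target_clock = target_ddr2 // 2            # DDR2/800 -> 400MHz clock
    divisor = math.ceil(cpu_mhz / target_clock)
    return (2 * cpu_mhz) // divisor            # effective DDR2 rating

# A 2.3GHz chip asking for DDR2/800: ceil(2300/400) = 6, so memory really
# runs at 4600/6 -- DDR2/766, the figure cited above. A 2.4GHz chip
# divides evenly and gets the full DDR2/800.
print(a64_effective_ddr2_speed(2300, 800))  # -> 766
print(a64_effective_ddr2_speed(2400, 800))  # -> 800
```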
Q: How well does Phenom overclock?
A: It will vary from chip to chip, of course, but Phenom is not shaping up to be a great
overclocker today. We didn’t get very far with our engineering sample chip and few
other reviewers have either. And when you look at how the thermals ramp up for
relatively minor speed increases, it’s no wonder. Going from 2.3GHz to 2.4GHz takes
the thermals from 95 watts to 125 watts. Going from 2.4GHz to 2.6GHz jumps it up to
140 watts. Older AMD and many Intel enthusiast parts have high thermal ratings but
only because they’re anticipating that users will overclock the hell out of them. We suspect
that the increased thermals for the two faster Phenom parts are more related to AMD’s
issue at the fab.
Above: AMD’s “true quad core” jams all four cores onto a single 65nm, 285mm2 die.
Q: What’s the deal with AMD’s tri core?
A: The tri core is being sold on the concept that if two is good and four is great, three is
a perfectly attractive middle option. AMD’s tri core is primarily aimed at people who
don’t want to pay for quad core but want some additional performance at a more
affordable price. The CPUs are, as you might suspect, dies that won’t pass muster as
quad cores but work fine with one core turned off. While some view this as selling
defective chips, AMD says it’s business as usual. In the past, if a portion of a CPU’s
1MB L2 was bad, it could be sold as a chip with 512KB or 128KB L2, with the offending
portion turned off. Like the higher-clocked Phenoms, the tri cores won’t be out until later
in the year—they will carry model designators of 7 instead of 9. Since they’re the same
chip as a quad core but with one core turned off, you can expect performance to fall in
between their quad- and dual-core brethren.
Q: Where does AMD go from here?
A: AMD’s next stop is 45nm, which it says will be online at the end of this year. There’s
likely to be a shrink of the Phenom core with some enhancements to get the
performance up, but AMD’s CPU code-named Bulldozer will be the next chip to truly
take on Intel. Bulldozer, which is due in 2009, will be a multicore design, but AMD hasn’t
revealed very many specifics. The problem for AMD is that Intel is expected to make
another jump forward with its chip code-named Nehalem, which will adopt AMD’s on-die
memory controller and chip-to-chip communication techniques and feature four cores
per die and an improved version of HyperThreading. With two quad-cores glued
together under the heat spreader, a Nehalem would have up to 16 cores (eight real,
eight virtual) available to the OS.
Above: AMD’s AM2 Socket
Three recommended AMD CPUs
RAM stands for “random access memory.” Your computer uses RAM as a temporary
workspace. The CPU transfers data and applications from long-term storage devices
(your hard drive and optical drive) into RAM, then runs the programs and accesses data
from memory. New data is created within your system memory before it’s ever saved to
a storage device. Every byte of information used by a PC during its operation flows
through RAM on its way to or from an I/O device, the CPU, or a storage device. Access
to data in RAM is immediate: The CPU can read or write to any location in memory
without having to muddle through the adjoining data.
Most RAM used in PCs today is dynamic RAM, or DRAM. It’s called “dynamic” because
the memory chips must receive new electrical charges (a process known as memory
refreshing) thousands of times a second, or the data stored in the chips is lost. This is
why information saved only in RAM is lost as soon as your PC is restarted or turned off.
RAM and Paging Files
If a program or data file is too large to completely reside in RAM, PCs use dedicated
areas of the hard disk to store the overflow. This dedicated disk space is known as
“virtual memory.” The paging file (swapfile) in Windows is an example of virtual memory.
Windows uses the paging file as a holding tank for information being transferred in and
out of your system RAM. The less RAM you have, the more frequently your paging file
is used. Although a paging file enables a system with a relatively small amount of
memory to work with files that exceed the amount of available physical memory, using
the paging file instead of physical memory has a huge negative impact on performance.
Hard drives move data orders of magnitude slower than even the slowest RAM. This
means that the more memory you add to your system, the greater the number of
programs you can run, and your system can work with larger files before resorting to the
paging file. In an ideal situation, the paging file would never be used. In practical terms,
you want to install enough memory to handle the largest amount of work (or play) your
PC performs on a routine basis.
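The paging behavior described above can be illustrated with a toy model. This is not Windows’ actual page-replacement algorithm, just a minimal least-recently-used sketch showing how a working set larger than physical RAM forces trips to the paging file:

```python
from collections import OrderedDict

class TinyVM:
    """Toy model of RAM plus a paging file: a fixed number of page frames
    in 'RAM'; touching a non-resident page triggers a page fault and
    evicts the least recently used page."""
    def __init__(self, frames):
        self.frames = frames
        self.ram = OrderedDict()  # resident pages, in LRU order
        self.faults = 0

    def touch(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)        # mark as recently used
        else:
            self.faults += 1                  # slow trip to the paging file
            if len(self.ram) >= self.frames:
                self.ram.popitem(last=False)  # evict LRU page
            self.ram[page] = None

vm = TinyVM(frames=2)
for p in [1, 2, 1, 3, 2]:  # working set bigger than RAM -> thrashing
    vm.touch(p)
print(vm.faults)  # -> 4 faults for 5 accesses
```

Add a third frame and the same access pattern faults only three times, which is exactly why adding RAM speeds up a machine that leans on its paging file.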
Q: What about DDR3?
A: DDR2 (double data-rate 2) is the standard memory for all Intel and AMD desktop
computer systems today. However, we should see an expanded push for DDR3 RAM
this year. The new memory spec promises higher bandwidth but at the cost of higher
latencies. In late 2007, this compromise, along with higher prices, made DDR3 seem
pretty irrelevant. But we have seen one promise from DDR3 – really high clock speeds.
DDR3 modules are already pushing 1,800MHz whereas DDR2 topped out at 1,066MHz.
As clock speeds increase, the latency becomes less of an issue. Combined with the
higher front-side bus speeds of Intel’s 45nm Penryn CPU, we think DDR3 is starting to
show some promise. With that said, you can’t lose with DDR2, and it’s pretty darned cheap.
Q: What is the significance of the numbers listed after the model number for a
module, such as 3-4-4-8?
A: The first number is the CAS latency (CL), which is the number of clock cycles
between the time a read command is sent and the data is available. The second
number is the tRCD (row address to column address delay), which is the number of
clock cycles between the active command and the read or write command. The third
number is the tRP (row precharge time), which is the number of clock cycles between a
precharge command and the active command. The fourth number is the tRAS (row
active time), the number of clock cycles between a bank active command and a bank
precharge command. The standard values for a memory module are stored in its SPD
(serial presence detect) chip, and are used by the BIOS when you select “By SPD” or
“Auto” for memory timings.
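To turn those cycle counts into real time, divide by the memory clock (half the DDR2 rating). A small helper (our own illustration) shows why faster clocks offset looser timings, as noted in the DDR3 answer above:

```python
# Absolute latency = cycles / memory clock. A DDR2/800 module's timings
# run against a 400MHz clock, so ns = cl * 2000 / rating.
def cas_ns(ddr_rating, cl):
    return cl * 2000 / ddr_rating

print(cas_ns(800, 4))            # -> 10.0ns for DDR2/800 at CL4
print(round(cas_ns(1066, 5), 1)) # -> 9.4ns: higher CL, similar real latency
```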
Q: Can I change these values?
A: Most systems permit you to manipulate memory timings. Reducing the tRCD and
tRP values can improve memory performance, although you might need to increase the
CL value a bit to maintain stability.
Q: When I add memory to my system, what are the most important specs to look for?
A: SPEED - (PC or PC2 rating). This should be the same or faster than your existing modules.
SIZE - For a single-channel system, buy the largest (in MB) module you can afford. For
a dual-channel system, buy a matched set of modules providing the total size you need.
For example, two 1GB modules will run faster than a single 2GB module on a dual-channel
system. If you’re upgrading a laptop, you usually only have one memory slot, so fill it
with the biggest module available.
TIMINGS - If you’re a hardcore gamer, you’ll probably want to overclock your memory.
Look for low-latency memory, and remember to consider all the numbers, not just the
CAS latency.
Warning: If you’re serious about building a game machine, do not skimp on your video
card! All the latest graphically intense games — like Supreme Commander, Unreal
Tournament III, and Crysis, just to name a few — look absolutely incredible with the
resolution, anti-aliasing, and detail settings cranked up to the max. This is truly the way
it was meant to be played.
But you won’t even get close to realizing this gaming dream without investing a serious
slice of your budget in a monster video card. There is no sight on Earth sadder to a
gamer’s eye than seeing a potentially beautiful game reduced to minimal graphics
settings and resolution, and still chugging along with a low frame rate. Don’t let this
happen to you.
The graphics card is the single biggest factor (though not the only factor) in determining
how fast your computer will be able to run the latest frag fest or grand strategy game.
Choosing last year’s card will earn you some pretty chunky frame rates, and that simply won’t do.
Although there used to be two separate types of videocards — 2D cards for desktop
work and 3D cards for games — today’s videocards do everything in one sexy silicon
package. And over the years, as games have become increasingly complex and more
lifelike, videocard development has accelerated, rapidly bringing Finding Nemo-quality
graphics on your desktop closer and closer to reality. While that day is still a ways out,
modern videocards are technological wonders that are just as complex (and just as
expensive, unfortunately) as some high-end CPUs.
Above: NVIDIA’s 8800GT
The consumer videocard market is currently dominated by just two companies: ATI and
NVIDIA. Today, DirectX 10 cards like NVIDIA’s 9800 line lead the pack in
performance, with stiff competition from ATI’s brand-new cards based
on its RV670 chip. And with NVIDIA’s SLI or ATI’s Crossfire technology, which allows
you to run two high-powered cards in tandem for a huge bump in performance, game
graphics are experiencing an unprecedented boost in hardware power.
With that in mind, this may be the most important section of this article. We’ll help you
find the right card, starting by answering some frequently asked questions.
Q: Are integrated graphics really that bad? Can they run a game like
Assassin’s Creed?
A: Integrated graphics—that is, graphics that are built directly into a motherboard—are
designed to provide minimal 3D performance in exchange for reduced cost. They’re not
designed for gaming, but rather simple 2D desktop work. As such, anyone serious
about gaming should never consider using integrated graphics.
Q: If I buy a top-of-the-line videocard today, how long will it be a viable solution
for good gaming?
A: In general, a high-end videocard should be extremely capable for at least a year, and
probably longer depending on what kind of frame rates you demand and the kind of
high-end features you’d like to be able to enable. There are games out today, for
example, that run just fine on 3-year-old cards, but that’s typically because the 3D
engine used in those games came out at roughly the same time as the card. Play a
brand-new game with a modern engine on the same card, however, and it’ll probably
run like a slide show, if at all.
Above: Assassin’s Creed in DirectX 10
While the videocard industry generally relies on a six-month refresh cycle for all of its
cards (meaning that you’ll usually see new cards from both ATI and NVIDIA twice each
year), the game industry moves at a much slower pace. All 3D games run on their own
“engine” — a massive pile of code that, among other things, determines the visual
quality of the game you eventually see on your screen. These engines take years to
develop, and are as forward-looking as possible, meaning they are designed to run on
hardware that won’t even exist until several years down the road! As a result, many
brand-new engines/games are brutal on PC hardware when they’re first released. But
over time, hardware catches up and eventually surpasses the 3D engine’s capabilities.
A classic example of that is id software’s Quake III. When it was first released several
years ago, nothing but the most high-end card could run the game at a constant 30
frames per second. Today, the latest hardware runs that same engine at several
hundred frames per second. And a few years from today, new, as-yet-unimagined
videocards should churn through Quake 4 and Oblivion in the same way!
The boxes videocards come in are filled with mumbo-jumbo touting often obscure
features and wildly out-of-context performance numbers. Here are the key features that
actually matter.
DirectX 10: The most important thing to know about DX10 is that both AMD and
NVIDIA GPUs that support it feature a unified architecture. This means that any or all of
the processor’s computational units (aka stream processors) can be dedicated to
executing any type of shader instruction, be it vertex, pixel, or geometry. As a result,
DX10 compatibility is a desirable feature even if you don’t plan on running Vista.
Memory Interface: In theory, a GPU with a 512-bit interface to memory will perform
faster than one with a 256-bit memory interface. But don’t be confused by AMD’s 512-
bit “ring bus” memory: that architecture is 512 bits wide inside the GPU across the
board, but only the company’s high-end GPUs have a true 512-bit interface to memory
itself; its lesser components have only 128- and 256-bit paths. And don’t judge a card
based solely on its memory interface, either. NVIDIA’s 8800 GTX and 8800 Ultra are
considerably faster than AMD’s ATI Radeon HD 2900 XT despite having a much
narrower 384-bit memory interface.
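If you like to see the math, here’s a quick back-of-the-envelope sketch. The memory clock figures below are rough approximations for these cards, not official spec-sheet numbers:

```python
# Peak memory bandwidth from bus width and effective (double-data-rate) speed.
# Clock values are illustrative approximations, not official specs.

def memory_bandwidth_gb_s(bus_width_bits: int, effective_mt_s: int) -> float:
    """Bytes per transfer (bus width / 8) times millions of transfers per second."""
    return bus_width_bits / 8 * effective_mt_s / 1000

# GeForce 8800 GTX: 384-bit bus, GDDR3 at roughly 1800MT/s
print(memory_bandwidth_gb_s(384, 1800))   # 86.4 GB/s
# Radeon HD 2900 XT: 512-bit bus, GDDR3 at roughly 1650MT/s
print(memory_bandwidth_gb_s(512, 1650))   # 105.6 GB/s
```

As the numbers show, raw bandwidth alone doesn’t decide real-world performance; the narrower-bus 8800 GTX still wins in games.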
Stream Processors: Unlike CPUs, which have one to four processing cores on a single
die, modern GPUs consist of dozens of computational units known as stream
processors. As with the GPU’s memory interface, however, simply counting the number
of stream processors doesn’t necessarily indicate that one videocard is more powerful
than another. AMD’s ATI Radeon HD 2900 XT, for example, is much slower than
NVIDIA’s GeForce 8800 GTX despite the fact that the latter part has only 128 stream
processors to the former’s 320.
HDMI: If you purchased a new big-screen TV, it’s probably outfitted with an HDMI port,
either instead of or in addition to a DVI port. The big difference is that HDMI is capable
of receiving both digital video and digital audio over the same cable. Videocards based
on AMD’s new GPUs are capable of taking audio from the motherboard and sending it
out through an HDMI adapter that connects to the card’s DVI port. With an NVIDIA card,
audio must be routed to your display or A/V receiver over a separate cable.
HDCP: This acronym refers to the copy-protection scheme deployed in commercial Blu-
ray and HD DVD movies. In order to transmit the audio and video material on these
discs to your display in the digital domain, both the videocard and the display must be
outfitted with an HDCP decryption ROM. This copy protection is not currently enforced if
the signal is transmitted in the analog domain. (See also Dual-Link DVI)
DUAL-LINK DVI: Driving a 30-inch LCD at its native resolution of 2560x1600 requires a
videocard with Dual-Link DVI, which is relatively common in mid-range and high-end
products. What’s not so common is a videocard that supports HDCP on Dual-Link DVI;
without that feature, the maximum resolution at which you can watch Blu-ray and HD
DVD movies is 1280x800.
Above: Note the proprietary VIVO port next to the DVI ports on this NVIDIA card
VIVO: The acronym stands for video in/video out—analog video, that is. Most
videocards are capable of producing, in order of quality, composite, S-, or component-
video that renders them friendly to analog TVs. Support for these types of video input—
which is useful primarily for capturing analog video from VCRs and older camcorders—is
much less common.
BLU-RAY AND HD DVD SUPPORT: As backward as it sounds, high-end videocards are
less capable than mid-range videocards when it comes to decoding the high-resolution
video streams (H.264, VC1, and MPEG-2) recorded on commercial Blu-ray and HD
DVD movies. AMD’s ATI Radeon HD 2600 XT and the upcoming RV670 fully offload
the decode chores from the host CPU; the ATI Radeon HD 2900 XT does not. On the
NVIDIA side, the GeForce 8600 GTS and the 8800 GT do, but the 8800 GTS, 8800
GTX, and 8800 Ultra do not.
If one videocard can churn out 30 frames per second, two in the same machine should
be able to pump 60fps, right? Well, not exactly. Assuming your PC is even capable of
running more than one GPU at the same time, the best performance bump you can look
forward to is about 80 percent in a dual-GPU configuration. Very high-end GPUs scale
much less effectively.
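Here’s that math as a quick sketch; the 80 percent figure is the rough best case described above, not a guarantee for any particular pair of cards:

```python
# Dual-GPU scaling: the second GPU contributes only a fraction of its raw power.
# The 0.8 default reflects the ~80 percent best-case bump cited above.

def multi_gpu_fps(single_card_fps: float, scaling: float = 0.8) -> float:
    """Projected frame rate for two GPUs with the given scaling efficiency."""
    return single_card_fps * (1 + scaling)

print(multi_gpu_fps(30))        # 54.0 fps -- not the naive 60
print(multi_gpu_fps(30, 0.5))   # 45.0 fps for a pair that scales poorly
```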
The point is moot, of course, if your motherboard doesn’t support running two or more
videocards simultaneously—and that means more than simply having a mobo with two
or more PCI Express slots. Running multiple AMD ATI Radeon videocards, for instance,
requires a CrossFire compatible motherboard. Doing the same with two GeForce cards
requires an SLI-compatible motherboard (the acronym stands for scalable link interface).
Right: NVIDIA’s 9800GX2 in SLI mode is technically four GPUs in one PC
It’s understandable that you can’t chain AMD and NVIDIA videocards together—the
architectures are radically different—but there’s no good reason why you can’t mix and
match videocards and motherboards. HP, in fact, recently figured out how to do just that
with its Blackbird 002 gaming PC (which can be outfitted with two Radeon HD 2900
XT videocards in CrossFire on a motherboard with an NVIDIA SLI chipset).
Unfortunately, HP isn’t sharing this firmware/driver trick with the rest of us.
Looking on the bright side, both companies support both AMD and Intel CPUs; gaining
access to SLI, however, requires a motherboard with an NVIDIA chipset. CrossFire
support is available with both AMD and Intel chipsets. Both companies’ technologies
also require that the GPUs on each videocard be identical, although they don’t
necessarily need to have the same clock speeds or even the same-size frame buffers.
You can couple an NVIDIA GeForce 8800 GTS with a 640MB frame buffer to a
GeForce 8800 GTS with a 320MB frame buffer, for instance, but you can’t pair either of
those cards with a GeForce 8800 GTX.
TRIPLE AND QUAD GPUS
NVIDIA launched quad-SLI technology some time ago, but the solution failed to gain
much traction in the market: It didn’t scale particularly well, it was wickedly expensive,
and it was available only in pre-built systems from OEMs. The solution featured four
GeForce 7900 GPUs mounted on four PCBs that fit into two PCI Express slots on the
motherboard. NVIDIA never announced a similar solution for its 8-series products; and
as we went to press, there were still no Vista drivers available for those rigs.
In the wake of Ageia shipping its PhysX physics accelerator last year, both AMD and
NVIDIA made a great deal of noise about doing physics acceleration on the GPU.
Despite several technology demos, in which a third videocard was used to accelerate
physics, this initiative also failed to get off the ground. Now that NVIDIA has acquired
Ageia and its technology, its next-generation cards may feature a built-in PhysX
processor so you won’t have to buy a separate add-in card.
AMD recently announced CrossFireX technology, which will enable three and four
videocards to operate in a single motherboard (one with three PCI Express slots,
obviously), and NVIDIA was making noises about the same thing with SLI. As with
NVIDIA’s quad SLI, all three (or four) GPUs will be used to produce graphics. NVIDIA’s
new nForce 780 and 790 motherboards all support triple-SLI. With very large monitors
becoming increasingly less expensive, gamers need all the graphics horsepower they
can lay their hands on.
We thought DirectX 10 would be the one reason to consider holding our noses and
upgrading to Vista. While there’s no reason why Microsoft could not release DirectX 10
for Windows XP, the company has so far insisted on keeping DX10 and Shader Model
4.0 exclusive to its new OS.
The new API gives game developers the tools to dramatically increase the visual
complexity of their games. However, from what we’ve seen of DX10 games so far, there
are too few compelling reasons to justify abandoning XP right now for anyone who does
not fall into the “must early adopt” category. Vista’s slow adoption rate is one reason
why developers have been reluctant to move to it. Valve recently released statistics
culled from its Steam gaming service that revealed only three percent of its one million
anonymous users had machines equipped with both a DX10-compatible videocard and Vista.
“[Microsoft’s] decision to couple DX10 with Vista was a mistake,” said Valve’s director of
marketing, Doug Lombardi. “There is no difference between running Orange Box games
[Half-Life 2: Episode 2, Team Fortress 2, and Portal] on Vista versus XP, but there are
some benefits to having a DX10 GPU.”
But this is more than just a chicken-or-the-egg problem. DX10 and Shader Model 4.0
are also more complex to program than DX9 and SM 3.0, and most of the games that
shipped last year were far along in their development cycles when Microsoft made
these new tools available.
Lombardi, for example, told us that Valve’s developers do make use of the unified
architecture that’s unique to DX10-class GPUs in order to deliver more sophisticated
facial animation in Team Fortress 2, but you don’t need Vista for this because they
didn’t tap DX10 or SM 4.0.
The few games we’ve seen that do make use of DX10 (both new games and previously
released games with DX10 patches) don’t look significantly better running under Vista
than they do with Windows XP. But what’s worse is that they run slower on Vista. When
we patched the RTS game Company of Heroes and ran it at 1920x1200 resolution in
Windows XP (using an EVGA GeForce 8800 GTS with 640MB of memory), we
achieved a playable 42.3 frames per second. When we played the same game on the
same machine using Vista, frame rate plummeted to a creaky 20.2 frames per second.
It would be one thing if the trade-off resulted in supremely better graphics, but we
couldn’t see any significant differences. We had a similar experience with World in Conflict.
Microsoft’s recent announcement of DirectX 10.1 and Shader Model 4.1 has rendered
the situation even more complex. These new versions were released along with Vista
Service Pack 1, but they’re supported only by AMD’s and NVIDIA’s very newest GPUs
(we’re talking about the G92 and the RV670). So if you thought buying any Radeon
2000-series or any GeForce 8000-series card rendered you future-proof, you’re in for a surprise.
Microsoft, of course, insists these updates don’t render these cards obsolete. “The
updated API,” said Microsoft’s Sam Glassenberg, lead DX10.1 programmer, “provides
full support for all existing Direct3D 10 hardware and upcoming hardware that supports
the extended feature set. The API is a strict superset. No hardware support has been
removed in DirectX 10.1.”
The new API renders mandatory several features that were previously optional.
Compliant GPUs must now support at least 4x AA and 32-bit floating-point filtering, for
example.
Considering how slowly both consumers and developers are moving to Vista, we don’t
anticipate the point releases of these new tools to have much of an impact on the market.
Now that they’ve cashed in on the early adopters, ATI and NVIDIA are going after the
rest of us, with fast and inexpensive cards that do DirectX 10 – and beyond!
GeForce 8800 GTS 512MB $240, www.NVIDIA.com
Yet another variation on NVIDIA’s winning GeForce 8 series, the GTS 512MB is an
excellent card whose terrific performance is simply overshadowed by the
price/performance ratio of the 8800 GT. It’s a hefty card in the dual-slot form-factor of
the 9800GX2 and 9800 GTX, but with a modest bump up in the core clock and only
512MB of texture memory, instead of 768MB (as well as a slightly narrower 256-bit pipe
to squeeze frames through, compared to the 384-bit pipe of its two older brothers).
The extra forty bucks it costs over the cheaper 8800 GT doesn’t go to waste. Although
the gains seem modest in the benchmark chart below—a few extra frames per second
here, an extra 20 frames per second there—the differences become more prominent at
higher resolutions and with higher levels of postprocessing (including filtering and
antialiasing).
GeForce 8800 GT 512MB $200, www.NVIDIA.com
If you’ve got the bucks, then by all means, get yourself a GeForce 9800 GTX or a
9800GX2. But most of us are forced to cut corners every now and then on our PC
upgrading budget. That’s what makes NVIDIA’s GeForce 8800 GT such a winner of a
card: though it costs a lot less than the high end, and a hundred bucks less than the
8800 GTS 512MB version, its performance hardly plays like a second-tier card.
You can argue on and on about whether memory size, number of stream processors, or
clock speed are more important in a videocard, but in the end, it’s the balance of all
three that matters, and right now, you won’t find a more finely tuned card than the 8800
GT. It’s the one card we can recommend without reservation to gamers of all levels and
budgets.
Radeon HD 3870 $170, www.ati.com
With the HD 3870, ATI now has a very serious competitor to the cards NVIDIA’s been
dealing out to mid-range gamers: a dual-slot card with a high 775MHz core clock speed
and the 512MB of memory that a mid-range videocard deserves. It clocks in virtually
neck-and-neck in Crysis and faster in Half-Life 2: Episode One compared to the
GeForce 8800 GT, but gets winded and lags behind in RTS games like World in Conflict
and Company of Heroes. Even then, we’re talking about differences of 10 to 15 frames
per second, which looks less harsh when you consider that the MSRP on the HD 3870
is merely $170.
It doesn’t seem as finely tuned as the 8800 GT, and won’t appeal to as many different
levels of gamers as that card does, but it’s fast, inexpensive, DirectX 10.1–compatible
like the 3850—and well worth the extra $40 over that card. Plus, you can pair two of
them up for some awesome Crossfire action!
WINDOWS XP / WINDOWS VISTA
3DMark06 run at default resolution of 1280x1024; all other benchmarks run at
1600x1200 with 4x full-screen antialiasing and 16x anisotropic filtering enabled.
Name: GeForce 8800 GT
Half-Life 2: Episode One: 129/131
World in Conflict: 33/28
Company of Heroes: 58/54
Name: GeForce 8800 GTS 512MB
Half-Life 2: Episode One: 151/162
World in Conflict: 37/27
Company of Heroes: 59/55
Name: Radeon 3850
Half-Life 2: Episode One: 124/129
World in Conflict: 16/35
Company of Heroes: 50/47
Name: Radeon 3870
Half-Life 2: Episode One: 148/158
World in Conflict: 26/15
Company of Heroes: 58/42
The hard drive is truly the unsung hero of PC components. You know the type: the kind
of component that labors away in the background while all the flashy components like
the CPU and videocard get all the credit. Yet the hard drive is the one single component
that is used in almost every single task you’ll ever perform on your PC.
Whether you are trying to access folders on your hard drive, surfing the web or copying
content from one location to another, your hard drive is constantly in use. Even when
you are just sitting in front of your computer, staring at the screen, the hard drive’s
platters are spinning furiously as the drive’s read/write heads eagerly await your next
command. The millisecond you click on a folder, these heads leap into action to deliver
the data you’ve requested, and as soon as they complete your request, they return to
their “ready and waiting” status. You could say the hard drive is the Labrador retriever
of the PC, waiting patiently with its tongue hanging out and tail wagging as you decide
what trick you’d like it to perform next. As soon as you toss the bone and say “fetch!” it’s
off and running. And the good news is hard drives today are faster than ever before
thanks to the successful proliferation of the Serial ATA spec.
This new interface offers two main benefits over the old “parallel ATA” interface: more
bandwidth for future drives to take advantage of, and easier installation thanks to its
smaller cables and lack of jumpers. Parallel ATA drives have to be correctly configured
via jumper pins as Master
or Slave prior to use, but the newer drives have no such limitation—just plug them in
and they work. Eventually, all hard drives will use the Serial ATA interface, so if you are
in the market for a hard drive today, you’d be wise to consider a SATA
drive in order to make your system as future-proof as possible. Plus, drive
manufacturers are only releasing their top-of-the-line drives in SATA form these days,
so if you buy one, you can be sure it’s the cream of the crop (for now).
What is SATA?
The parallel ATA connection standard for hard drives and optical drives has enjoyed an
unusually long tour of duty by PC standards, but it’s clear that the old spec is ready for
retirement.
PATA is called a “parallel” interface because multiple bits of data travel along the 40-pin
cable simultaneously on separate channels. But the parallel ATA interface tops out at a
maximum transfer rate of 133MB per second, due to crosstalk. Crosstalk occurs when
electrical signals on adjoining wires interfere with one another. It’s like trying to have a
conversation with a friend on a crowded bus while the dumbass sitting next to you is
yelling into his cellphone. Because you’re sitting so close to Mr. Cellphone, you can only
hear his conversation, so you have to talk louder to make your conversation heard. But
then he starts talking louder on the phone, and pretty soon neither of you can hear
anything and everyone else on the bus is pissed off. That’s crosstalk, and trying to push
data through IDE cables faster just generates too much of it. And because the lasagna-
sized parallel cable is already too large and unwieldy to accommodate good airflow in
today’s PCs, an even wider cable just isn’t an acceptable solution. Fortunately, there’s
another way to push data at extremely high rates while eliminating the crosstalk
problem: Serial ATA.
Instead of adding more parallel wires and channels, Serial ATA eliminates the problem
of crosstalk by using an interface that pumps data through a single channel one bit at a
time. Without the worry of electrical crosstalk, these bits can be pushed along the serial
cable much faster than across parallel ATA.
The Serial ATA cable uses seven wires, three of which are ground wires, with the other
four carrying data. Two of the data wires are dedicated to moving data from the
computer to the hard drive (downstream), and two are dedicated to carrying data from
the hard drive to the computer (upstream).
Q: What makes a hard drive “fast”?
A: Many factors define a hard drive’s raw speed potential, but the most important is the
rotational speed of its platters. All drives store their data on internal platters, and the
data is retrieved when the platters spin under read/write heads. The faster these little
platters spin, the faster the data can be accessed. Today’s standard desktop drives
rotate at 7200rpm, and these drives are very fast. There are also a handful of
10,000rpm drives, which are insanely fast due to their rotational-speed advantage. On
the server side of things, where performance is king and money is no object, 15,000rpm
drives reign supreme. These drives are the absolute pinnacle of performance, but not
practical for desktop tasks due to their high cost and relatively small capacity.
The size of a drive’s onboard memory plays a distinct role in its overall performance as
well, with the rule of thumb being “the bigger the better.” Onboard memory buffers range
in size from 2MB to 16MB, and drives with these large buffers deliver up to 30 percent
faster performance, on average, than drives with smaller buffers. Typically, data is
delivered from the buffer as fast as the interface allows, so the more data a drive can
wedge into its buffer, the faster it can perform typical desktop tasks.
Q: What is Serial ATA?
A: Take a look at a machine equipped with Serial ATA, and the most striking feature will
be the skinny data cables. While skinny cables have a positive impact on a case’s
internal airflow, this isn’t the main reason why the PC industry is dropping parallel ATA
(and its flat, wide cables) for SATA. The main reason is that the current parallel
interface is facing a performance wall.
Parallel ATA cables send data along multiple wires within the same wide ribbon. Each
piece of data must travel along the length of the familiar ribbon cable, and arrive at the
same time in order to maintain data integrity. In order to get more speed from this
scheme, the only option is to push the data to higher frequencies or make the data path
wider. That’s where the problems lie. Making the data path wider is impractical, as there
are already 80 conductors in the ribbon. And increasing speed adds to the likelihood of
crosstalk.
Because serial interfaces don’t have to deal with coordinating multiple lanes of data,
we’re able to push them to much higher speeds. SATA launched with speeds of 150MB/
s, slightly higher than the 133MB/s offered by the fastest parallel ATA spec. 3G SATA
drives have already doubled speeds to 300MB/s, and will double again to 600MB/s by
next year.
Although current hard drive transfer rates fall far short of the maximum throughput of
even parallel ATA specs, companies are laying the foundation for the future. You don’t,
after all, wait for the traffic jam before you try to build the roads (unless you run the
state highway department, that is).
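For the curious, the conversion from raw line rate to usable throughput is simple; SATA’s 8b/10b encoding spends 10 bits on the wire for every 8 bits of data:

```python
# SATA puts each data byte on the wire as 10 bits (8b/10b encoding), so usable
# bandwidth is 80 percent of the raw line rate.

def sata_throughput_mb_s(line_rate_gbit_s: float) -> float:
    """Usable MB/s from a raw SATA line rate in Gbit/s."""
    return line_rate_gbit_s * 1e9 * 0.8 / 8 / 1e6

print(sata_throughput_mb_s(1.5))  # 150.0 -- first-generation SATA
print(sata_throughput_mb_s(3.0))  # 300.0 -- SATA 3Gb/s
print(sata_throughput_mb_s(6.0))  # 600.0 -- the planned next step
```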
ATA: Advanced Technology Attachment. This is the parallel interface used to attach hard
drives, CD ROMs, and DVD drives to the majority of PCs on the market. The term
“ATA” is used interchangeably with the term “IDE.” Officially, there are the ATA-1
through ATA-6 specifications, which usually are written as ‘ATA’, and then the
interface’s maximum throughput. For example, the final spec of parallel ATA is
ATA/133, which allows for data transfers of up to 133MB per second.
Q: What are the different rotational velocities offered in today’s hard drives, and
what are the benefits of each?
A: Today’s desktop hard drives are offered in three rotational speeds. The slowest is
5400rpm, with these drives primarily being used for rudimentary storage duties where
speed is of little importance. They are affordable since they represent last-gen
technology and are not in high demand. The next fastest speed is 7200rpm, which is the
norm for today’s desktop drives.
These drives are very fast, and are more than adequate for all but the most demanding
desktop users. Finally, for those “demanding” types, there are the 10,000rpm Raptor
drives from Western Digital. These puppies are wicked-fast, and are zippier than
7200rpm drives by a wide margin. The only drawback to 10,000rpm drives is that they
are currently only offered in 150GB or 300GB capacities, while 7200rpm drives are
offered in capacities ranging from under 10GB, all the way up to 1TB!
RAID: Redundant Array of Inexpensive Disks. An arrangement whereby more than one
hard drive is combined to form a single storage volume. Depending on the configuration,
better performance, better security, or both can be attained. The only practical way to
double a hard drive’s speed is to add a second drive and divide the work between
them. It’s a process called RAID, and here we see a four-drive array (the fifth drive
is the primary volume).
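To picture how a striped array divides up the work, here’s a toy sketch of RAID 0-style block distribution (illustrative only; real controllers do this in hardware with fixed stripe sizes):

```python
# RAID 0 striping: successive blocks alternate across drives, so multiple
# drives can read or write their share in parallel.

def stripe(blocks, num_drives):
    """Assign successive blocks to drives round-robin, RAID 0 style."""
    drives = [[] for _ in range(num_drives)]
    for i, block in enumerate(blocks):
        drives[i % num_drives].append(block)
    return drives

data = ["B0", "B1", "B2", "B3", "B4", "B5"]
print(stripe(data, 2))  # [['B0', 'B2', 'B4'], ['B1', 'B3', 'B5']]
```

With the blocks split evenly, each drive handles half the reads and writes, which is where the near-doubling of throughput comes from.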
Internal Drives
Western Digital Caviar
Interface: SATA 3.0Gb/s
Avg Seek Time: 8.9ms
Avg Write Time: 10.9ms
External Drives
Interface: USB 2.0
Transfer speed: 480Mb/s
Western Digital My
Passport Elite 320GB
Interface: USB 2.0
Transfer speed: 480Mb/s
4 Plus 1TB
Interface: USB 2.0 /
Transfer speed: 480Mb/s
When the compact disc was introduced by Philips and Sony in 1979, vinyl records had
the misfortune to be standing directly in its path. Those black, circular monstrosities—
with their fragile surfaces and analog data—couldn’t compete with the CD’s deadly
combination of digital clarity and rugged portability. A few years later, engineers figured
out how to adapt audio CD technology for use with computer data by adding strong
error detection and correction schemes, which led to the downfall of the floppy disk.
This storage medium then evolved into DVD, which has taken over as the standard for
distributing audio, data, and video to consumers. Today, it continues to evolve at a
rapid pace.
Both CD and DVD drives fall under the banner of “optical storage.” These drives contain
a laser, and when a disc is inserted, the laser “looks” at the surface of a disc, where
information is encoded in a single spiral track that begins in the center of the disc and
moves outward toward the edges. The laser is looking for variations in the surface of the
disc, from which it derives digital data (ones and zeroes, in other words). The spiral
track in a commercial CD-ROM contains a series of bumps and flat surfaces called
“pits” and “lands” embedded in a clear layer just below the disc’s outer surface. These
“pits” and “lands” represent ones and zeroes and are the building blocks of data.
Recordable CDs, or “burned” CDs, work in a similar way. Commercial, write-once and
recordable DVDs use these same principles to store information.
Next-gen or not?
Optical technology is currently making another giant leap forward with the introduction
of the next generation of storage discs: Blu-ray, which increases capacity from DVD’s
4.7GB and 8.5GB all the way up to 25GB or more, using a blue laser instead of the
traditional red one. But should this new format figure into your gaming system at this time?
For starters, a next-gen burner is a big investment. Prices have certainly dropped in the
last year, starting at $1,000 or more a year ago and now resting at half that amount
today. And that trend is sure to continue, so it may be worthwhile to wait a while longer.
Yes, these drives are uniquely capable of burning huge amounts of data to a single disc
(25GB per layer for Blu-ray), but the media is also quite pricey, running $12 to $15
per single-layer disc and twice that for double-layer media. If data backup is your
primary concern, an external hard drive might be a more cost-effective purchase. And
transfer times will certainly be speedier, as even the fastest next-gen drive we’ve tested
took more than 21 minutes to fill a 25GB disc—that’s not speedy, folks.
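A little arithmetic on the burn described above shows just how pokey that is:

```python
# Effective write speed for the 25GB Blu-ray burn cited above:
# capacity in MB divided by burn time in seconds.

def effective_write_speed_mb_s(capacity_gb: float, minutes: float) -> float:
    return capacity_gb * 1024 / (minutes * 60)

print(round(effective_write_speed_mb_s(25, 21), 1))  # 20.3 MB/s
```

That’s roughly what a hard drive managed years earlier, and an external drive doesn’t make you wait for a disc to finalize.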
Indeed, the majority of your disc-burning needs might best be handled by a good, old-
fashioned standard DVD drive. High-performance models, capable of speedy 18x
burns, can be had for less than 100 bucks. And DVD media is itself very affordable. This
can be a compelling stop-gap measure while you wait for “next-gen” optical to come into
its own.
Q: Does it make any difference what color or brand of media I use?
A: Which one should you use? That’s easy. Check the documentation that came with
your optical drive, or the manufacturer’s website for media recommendations. These
recommendations didn’t come about as a result of back-alley deals or bribes of exotic
whiskey. You’ll find some media brands recommended over others because these discs
have been specifically tested with the manufacturers’ drives. The proper laser strength
for each type of media has been evaluated and programmed into the drive’s firmware.
In general, we do not recommend buying cheap spindles of off-brand media, no matter
how inexpensive they are. El Cheapo vendors aren’t worried about brand loyalty, so
they skimp on quality control and you pay the price in discs that are error-prone or that
won’t retain their data for very long.
Q: Are non-combo, non-dual-format drives even relevant anymore? Why would
anyone ever buy a dedicated DVD-ROM drive, CD-ROM drive, or CD-RW burner in
this day and age?
A: Today’s optical storage market is divided among drives that record to CD-R, DVD-R,
or both. Drives that do both are known as “combo” drives, while drives dedicated to
recording in one format are called “dedicated” drives. A general rule of thumb is that
dedicated drives tend to offer higher speeds than combo drives, but the speed
differential is often negligible. But there are still good reasons to look at dedicated or
single-format drives. For example, Plextor’s ultra-foxy PlexWriter Premium CD-RW drive
doesn’t burn DVDs, but it offers a staggering amount of one-of-a-kind features, like the
ability to tweak laser strength for higher compatibility with your audio equipment, and to
“overburn” discs so that ordinary CDs can contain as much as 1GB of data.
Another reason to covet a dedicated drive is price. For example, if you know your set-
top DVD player can read DVD-R discs, don’t waste your money on a dual-format burner
when you can buy a less expensive single-format burner.
Some factory-pressed DVDs contain data on two layers for a total capacity of around
8.5GB. While all DVD players, including DVD-ROMs, can fully access dual-layer discs,
all current recordable DVD formats are based on single-layer technology and are limited
to roughly half that capacity.
Lite-On 20X DVDR Burner with LightScribe
Formats: DVD+R, DVD+RW, DVD-R, DVD-RW, CD-R, CD-RW
Read speed: 16X for DVDs, 48X for CDs
Access time: 160ms
Formats: Blu-ray, DVD, CD
Read Speed: 2X for Blu-ray, 8X for DVDs, 24X for CDs
Interface: SATA 150MB/s
Access time: 210ms for BRD, 170ms for DVDs, 150ms for CDs
Sony BWU-200S 4X Blu-ray Disc Burner
Formats: Blu-ray, DVD, CD
Read Speed: 4X for Blu-ray, 16X for DVDs, 40X for CDs
Interface: SATA 150MB/s
Access time: 210ms for BRD, 170ms for DVDs, 150ms for CDs
For years, the soundcard looked as though it was headed to join the scrapheap along
with the Ethernet card, USB 2.0 card, and Firewire card. Oddly, a recent renewed
interest in soundcards indicates that this dog may still have a little hunt left in it. Creative
Labs’ X-Fi has been the premier soundcard, but entries from Asus, Auzentech, Razer,
and others have recently been introduced for PC enthusiasts. Why run a soundcard
instead of the “free” onboard stuff on your motherboard? The main reason is that it
simply sounds better. Onboard audio’s biggest weakness is sharing the same space as
the other electrically noisy components on a motherboard. This leads to the snap,
crackle, and humming that most people associate with bad audio. Onboard audio also
suffers because making good audio isn’t most motherboard companies’ strength; they
just need it to fulfill a checkbox on the packaging.
Today, gamers are faced with two choices: hardware audio-processing or host-based.
There’s only one soundcard series with hardware support: Creative’s X-Fi (and
Auzentech’s authorized copy). X-Fi cards will actually process the complex math for
audio on the digital signal processor (DSP) on the card. Newcomers, such as Asus’
Xonar or Razer’s Barracuda AC-1, actually process the math on the CPU and use the
soundcard as little more than a glorified I/O card to pass the audio signal out of the
system to your speakers. The argument for the X-Fi cards is that they will put less of a
load on the CPU and thus, theoretically, increase frame rates. For the most part, we’ve
found this to be true. However, with quad-core computers becoming the norm, is the
soundcard even really working that hard?
Host-based soundcards are actually quite good and offer features that DSP-equipped
cards cannot, such as real-time encoding of content to Dolby Digital. For those looking
to use the PC with a home entertainment system, a card like the Asus Xonar is a better
fit than the X-Fi.
There’s also been a push toward ever more satellite speakers, with 5.1 giving way to
6.1 and now 7.1 audio. While additional speakers do help, we don’t find it
practical to run seven speakers around our PC. Plus, support for 7.1 and 6.1 audio in
speakers tends to be mismatched, with not all systems working quite right. The sweet
spot for someone looking for a good surround-sound experience is still a 5.1 speaker
setup. The good news is that cards that tout 7.1 support also work fine with most 5.1
systems.
If games are the main application you consider when it comes to sound, your choice
remains simple: Creative’s X-Fi.
Q: What’s the difference between 24-bit/192KHz audio and 16-bit/44.1KHz audio?
A: 16-bit/44.1KHz audio is the specification for CD-quality audio, whereas 24-
bit/192KHz sound is recorded at a higher bit depth and sampling rate, meaning it
includes more information (or bits of data) about the sound than 16-bit/44.1KHz audio.
With more data per sample, sound is reproduced with increased resolution and is able
to convey more subtle nuances than at a lower bitrate. Unfortunately, it will be a while
before 24-bit/192KHz media becomes commonplace, simply because 16-bit/44.1KHz is
excellent sound quality by most people’s standards. Another roadblock to the adoption
of 24-bit/192KHz audio is that if you play a CD that was engineered at 16 bits, it won’t
sound better with a soundcard that’s capable of 24-bit resolution. Most 24-bit
soundcards do let you record at that resolution, though, which is a nice feature if you do
a lot of music recording.
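To put rough numbers on the difference, you can work out the raw data rate each format demands. This is a back-of-the-envelope sketch (plain arithmetic, not tied to any particular soundcard), comparing CD-quality stereo with 24-bit/192KHz stereo:

```python
# Raw data rate of uncompressed PCM audio:
# bit depth x sample rate x channels = bits per second.

def pcm_data_rate_kbs(bit_depth, sample_rate_hz, channels=2):
    """Return the uncompressed data rate in kilobytes per second."""
    bits_per_second = bit_depth * sample_rate_hz * channels
    return bits_per_second / 8 / 1000  # bits -> bytes -> KB

cd_audio = pcm_data_rate_kbs(16, 44100)       # 16-bit/44.1KHz stereo
hi_res_audio = pcm_data_rate_kbs(24, 192000)  # 24-bit/192KHz stereo

print(f"CD quality:    {cd_audio:.1f} KB/s")      # 176.4 KB/s
print(f"24-bit/192KHz: {hi_res_audio:.1f} KB/s")  # 1152.0 KB/s
```

At more than six times the data rate of a CD, it’s easy to see why 24-bit/192KHz media has been slow to catch on.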
Q: In specific terms, how badly might my 3D gaming frame rates suffer if I use a
“host-based” card that relies on my CPU for audio processing chores?
A: Most onboard sound chips (and even some add-in soundcards) offload audio
number crunching to the system’s CPU, which is generally bad. This is because, during
a 3D game, the CPU has its hands full feeding instructions to the videocard, so the last
thing it needs is more work. We know how it feels! However, by most benchmarks, the
difference in frame rates for a system using a host-based card and an add-in card is
usually less than 10 frames per second. If you have a monster gaming rig that has
frames to spare, you can afford to send some more work to the CPU. However, if you’re
running a “budget” system, an add-in card with its own audio processor is the way to go
for maximum gaming performance.
Q: Does it matter where I place the subwoofer?
A: A subwoofer produces tones that are so deep the human ear is unable to pinpoint
their location, which is why the conventional wisdom is to put the sub anywhere you like.
Your ears can’t tell the difference if it’s three feet behind you or five feet to your right.
However, there will always be a “sweet spot” in your listening area where the subwoofer
sounds best, so we recommend playing a bass-heavy DVD (Saving Private Ryan’s
opening sequence is a good choice) or some thumping music and then moving the
subwoofer around the room while returning to your listening area to see how it sounds.
Once you’ve pinpointed the “sweet spot,” invite your friends over to show off your home
theater a bit! For tips on speaker placement, you can also read the article on the next
page of this very issue.
2.1, 4.1, 5.1, and 6.1
Abbreviations used to denote the number of sound channels in a speaker system. The
number before the decimal point indicates the number of regular audio channels and
the number after the decimal point denotes the subwoofer (low-frequency) channel. For
example, 5.1 means five regular channels and one subwoofer channel.
Q: What do I need to play movies in Dolby Digital surround sound?
A: In order to listen to true “discrete” Dolby Digital multi-channel audio on your PC,
which is sound that is sent to separate channels from the sound source, all you need is
a piece of hardware to decode the sound into its separate channels and a 5.1 speaker
system. It’s really that simple. A Dolby Digital audio stream is a digital signal that
includes six audio channels, but these signals have to be sorted by a decoder and sent
to their respective channels in order to get that movie-theater sound separation where
you hear bullets whizzing from the front channel to the rear channel. This decoder can
either be built into the soundcard or the speakers, or it can be a separate add-on unit.
Q: How much power do I need?
A: A realistic assessment is that speaker systems that crank out 100 watts are
sufficiently loud for home use, or 200 watts for a surround sound system. However,
ultra-high-wattage speakers that are capable of 500 watts or more sound better at
moderate volumes since there is absolutely no stress to the speaker’s components at
lower levels, whereas lesser speakers can become considerably stressed at lower
volumes. The real reason for speakers to have high wattage ratings isn’t to actually use
those high levels of output, but to ensure distortion-free playback at lower volumes. As a
yardstick, the 2.1 Logitech Z-2300 speakers are capable of pumping 200 continuous
watts and are pure sonic fury. We almost went deaf testing them.
Q: I have a 5.1 speaker system, but am seeing advertisements for 6.1 and even 7.1
speaker systems now. Is it worth it to upgrade?
A: Although the addition of one little speaker behind you or two on the sides may not
seem like it would make a big difference, it does. The traditional 5.1 surround sound
speaker system sounds fantastic but leaves huge gaps in the sound field behind you
and on the sides as well. The only catch to upgrading to 6.1 or 7.1 is that Creative Labs
is the only manufacturer selling decent systems, but it offers both a budget 7.1 speaker
system called the Inspire T7700 and high-end 6.1 and 7.1 systems. You
can also buy Creative Labs’ S700 5.1 system, which is upgradeable to 7.1 for an extra
$100. Also note that you’ll need a soundcard that supports 7.1 sound, but there are
several models on the market currently that offer this feature.
SPEAKERS FROM SPACE
Q: I bought the Klipsch GMX speakers and have a Soundblaster Live. How can I
get 5.1 sound out of them?
A: The Klipsch GMX-D5.1s were designed primarily for console gamers and include
only digital 5.1 support. For PC gamers, this setup sucks, because most soundcards
can send only a two-channel PCM signal digitally. If you want to get 5.1 sound out of
your GMX-D5.1s, you’ll need an nForce motherboard or a soundcard that can output a
Dolby Digital 5.1 stream. Unfortunately, only the Sound Blaster Audigy and Audigy 2
products can do that now. Your GMX-D5.1s are essentially 2.0 speakers, unless you
have a properly outfitted soundcard.
WORRYING ABOUT WIRES
Q: What is the best way to lengthen speaker wires that are hard-wired into the
back of the satellite?
A: The only way to deal with this tricky situation is to don your electrician cap and splice
an extra length of wire into the main speaker wire. Grab a set of wire clippers/strippers
and clip the wire at any point.
Next, strip the cabling off the leading edge of the wires to expose the internal wires and
connect the two sections of cable. Twist the new wires together and wrap the exposed
portions of wire (the parts that used to be covered in cable sheath but are now
entwined) with electrical tape and you’re done.
Q: Is it OK to remove the dust covers from my speakers? I like the look of the
drivers and want to show them off!
A: It’s totally fine to remove the dust covers from your speakers. After all, that’s all they
are: covers to keep dust off the drivers. In fact, some people like the look of the speaker
drivers as opposed to the speaker grills, but to each his own. Be warned, though, that
removing the safety covers exposes your speakers to errant flying objects, mischievous
kitty cats and all sorts of desktop dangers. Personally, we like to protect our PC
hardware, so we’d leave ‘em on.
Though most people upgrade videocards and hard drives on an annual basis, they
rarely upgrade their PC case, unless tragedy strikes. The reason is simple: The ATX
specification for cases has been around a long time, and it’s still getting the job done.
Simply put, there’s usually little reason to upgrade unless you’re looking for more room,
more cooling or a more pleasing aesthetic. Indeed, these are the most important
characteristics of a case: it must be able to hold all your hardware, and have enough
fans to keep everything cool and relatively quiet.
Case fans: All cases include some sort of cooling system, though whether or not the
actual fans are inside the case at the time of purchase varies. Regardless, every case
has fan mounts, and it’s important to see what size they are prior to purchase. We
typically favor large 12cm fans because they spin slowly, and therefore are relatively
nice and quiet—and move a lot of air. You’ll want to make sure there’s a fan in the lower
front of the case to suck air into the PC, and a nice, big fan in the back to blow it out.
Some cases include exhaust fans on the top or side, too, but these aren’t
always necessary and can add a lot of unwanted noise.
Form factor: The lion’s share of consumer-level motherboards conform to the ATX
specification, so make sure the case in question supports this standard (most do). Once
you know it supports ATX, the only big question left is, How big do you want to go?
There are mid-tower cases, which are the size of what most consider to be a “regular”
desktop, and there are full-tower cases, which are much larger and longer than a
mid-tower. Though their size makes them unsuitable for frequent transport to LAN parties,
full-size towers are a breeze to work in given their cavernous interiors. They can also
hold a lot of hardware, which is nice if you have several hard drives, are running SLI, or
are thinking about investing in water cooling. For most users, however, a mid-tower will
be more than sufficient.
Construction: The materials that make up your prospective case don’t really matter
that much. What does impact the equation—and your arms—is the case’s weight. If a
case is cumbersome before you put anything into it, imagine its heft once it’s stuffed
with optical drives, hard drives, videocards, water-cooling reservoirs, etc. With that said,
you’ll only infrequently tote your case, so don’t skimp on quality just to get a lighter
case. It’s also important that the outside of your case is durable. If it gets all marked up
the second you run your fingernail across it or if it feels flimsy to the touch, move on.
What good is a sweet enclosure that turns ugly (or worse, broken) within a few weeks?
Features: Your case’s features can range from the truly useful to the simply cool. A
slide-out motherboard tray, for example, is a feature we’re always keen on. Toolless
drive bays are also welcome. Then, of course, there’s all the whiz-bangery (or lack
thereof) to consider: We’re talking LED fans, built-in gauges, locking systems, etc. Case
innovation can be a slippery slope, though, as sometimes these features are actually
more irritating than useful (we can’t count the number of poorly implemented screwless
PCI holders we’ve broken). A feature doesn’t have to be new to be unique—the simple
addition of changeable side panels to a case kicks ass, and there’s nothing overly fancy
about replacing a window.
Front-mounted connectors: This feature used to be limited to USB ports mounted
on the front of the case, but with Firewire and eSATA making headway in the market,
you’re going to want a case that gives you at least two of these connectors to play with.
And one should be eSATA – its speed benefits destroy anything Firewire or USB-based,
making it a perfect connection for that external backup drive of yours. Be sure to pay
attention to the location of these connectors, as sometimes they’re on the bottom-front
of the case, and other cases put them right at the top, which is nice if your PC is resting
on the floor.
Simply put, you don’t want an ugly case. But far be it from us to decide what’s atrocious
versus what’s attractive, as everyone has his own personal sense of style. While we
personally hate cases that look like they were pulled straight out of the X-Files prop
shop, some people are into that sort of thing. Of course, these same people might very
well hate a case that’s covered in branding for a particular professional gamer.
Antec 900
Antec’s Nine Hundred is solidly constructed and surrounded by enough air cooling to
bring Dorothy back home to Kansas. Shoot, we were effectively “blown away” by the
Nine Hundred, hereafter dubbed “the 900,” which is a fine example of case
craftsmanship, despite a few minor flaws.
The case’s internals are pleasantly predictable. Three 5.25-inch bays and six 3.5-inch
bays reside behind the case’s stylish front panel, and the full grill not only looks sharp
but also improves the 900’s ability to generate ample airflow. Two 12cm blue LED fans
suck air across your hard drives and into the eye of the storm, and a 20cm fan churns
on the 900’s ceiling.
And that’s not all! Another fan at the rear of the case helps make the 900 an ideal
solution for those who prefer air cooling to water cooling. Heck, you can even install an
additional fan on the case’s side window grill—a pleasant bit of overkill.
“Hurricane” is an apt term to describe the force produced by the 900’s fans at full tilt, but
if going deaf isn’t your thing, Antec has wisely given users the ability to customize
speeds via a little switch on each fan.
The 900’s few flaws—a hard-to-remove side panel, a ton of drive-bay thumbscrews, and
no eSATA port—are hardly enough to dump rain on this case’s parade.
Coolermaster Cosmos
We tipped our reviewing hand when we chose this case to house this year’s Build It
machine. But that’s just how sweet the Cosmos is. This case looks as good as it
functions, and there’s nary a blemish in either area. More important, the case retains
enough of a unique look and feel to distance itself from the bevy of generic models
we’ve seen.
You don’t need to grab a screwdriver to make major changes to any parts in the
Cosmos case (aside from the motherboard). The five front 5.25-inch bays use an
awesome push-button locking mechanism that, to date, is the best we’ve come across.
Tiny thumbscrews hold the six hard-drive trays in place - an elegant improvement over
standard drive bays.
The Cosmos caters to the water-cooling crowd with its ready-for-a-radiator ceiling grills,
but lovers of the air won’t be left out. A detachable 12cm fan bunker pulls in air from the
bottom of the case, and a plastic bar running horizontally across the case draws cool air
right into the videocard area. Strangely, there’s no airflow across the hard drives in this
case, one of the very few oversights we were able to find with the Cosmos. A lack of
functioning drive-activity lights on the case’s front panel is another stinger, but it’s not
enough to destroy the taste of this sweet, sweet chassis.
Q: Let’s start with the basics. What is a CRT?
A: Cathode Ray Tube. CRT monitors are just fancy implementations of the same
technology used in TVs: An electron beam originating from the base of
a vacuum-sealed tube scans across the tube’s screen, which is covered with a layer of
phosphor material. A metal grating or wire mesh limits how much of the electron beam
can hit individual phosphor clusters, thus leading to an acceptably sharp image. When
the phosphor material becomes excited, it glows either red, green or blue. Mix up
several differently colored phosphor clusters and suddenly you have millions of colors.
Although it is possible to make flat or nearly flat cathode ray tubes, most older models
exhibit some curvature, at least around the corners.
Q: But I have no desk space! What is an LCD?
A: Liquid-Crystal Displays are modern alternatives to CRTs. LCD manufacturing starts
with a flat pane of glass, which is then layered with a grid of small transistors; the
transistors are arranged in groups of three, and each triad describes a screen pixel.
When excited by electricity, these transistors can be made to open and shut. Put a
backlight behind the transistor grid and, behold, you have an image. There’s more to it
than that, but that’s the basic idea.
Q: What type of display is the best for gaming?
A: The knee-jerk recommendation has long been “a primo 19-inch CRT.” Why?
Because CRTs were better suited for gaming than LCDs, and because 19-inchers have
always been great values, price-wise. However, in today’s modern age of affordable
LCDs with blazing-fast refresh rates that eliminate the ghosting and blurring effects that
plagued earlier generations of monitors, we at PC Gamer are all on 20” widescreen
LCDs, and we’re loving them. Not only do they save desk space (and your back, if you
ever want to move them), but the image is as sharp and clear as you could want. The
one drawback is that they don’t look as good if they’re not at their native resolution, but
that’s a small price to pay.
Q: What is an optimal refresh rate?
A: On an LCD, this is a no-brainer—use whatever refresh rate the manufacturer tells
you to use for the native resolution of the panel (this is the resolution the display has to
run at in order for everything to look normal, i.e., not stretched out, squished, or jaggy).
This is almost always 60Hz, even if the monitor may be able to handle higher. Don’t
worry - LCD pixels don’t fade and strobe the way CRT pixels do, so 60Hz won’t cause
eyestrain. CRTs are a little trickier. While some would answer “as high a refresh rate
as the CRT will allow,” we recommend taking a more cautious approach. First, make
sure you’ve loaded the Windows drivers for your particular display before you mess with
the refresh rate settings. With the driver loaded, Windows won’t let you choose a rate
higher than the monitor can display without damage to its circuitry. Our advice is to stay
away from 60Hz; 75Hz or higher will be just dandy.
Screen size
This figure is the size of the LCD panel measured diagonally from corner to corner.
Desktop screens range in size from 15 to 24 inches and beyond. We consider 19 inches
the minimum for all-purpose computing. You need at least that much screen real estate
to work in multiple windows comfortably, and to thoroughly enjoy high-definition video
and PC games.
Aspect ratio
A display’s aspect ratio is its screen width divided by its height. The majority of desktop
monitors have an aspect ratio of 4:3, regardless of their screen size; and the majority of
software applications and computer games are designed accordingly. This is something
to bear in mind if you’re considering a widescreen model, which typically has an aspect
ratio of 16:10. If content, such as a game, insists on a 4:3 ratio, the display will stretch
the content to fill the entire screen, making everything look fatter than it should. This
situation is becoming less of a problem, as most games support at least one widescreen
mode that won’t look distorted.
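The distortion described above is easy to quantify: dividing the screen’s aspect ratio by the content’s tells you how much wider everything appears when stretched. A quick sketch of the arithmetic (not tied to any particular monitor):

```python
# Horizontal stretch when content is forced to fill the whole screen:
# screen aspect ratio / content aspect ratio.

def stretch_factor(content_ratio, screen_ratio):
    """How much wider content appears after stretching to fit."""
    return screen_ratio / content_ratio

factor = stretch_factor(4 / 3, 16 / 10)
print(f"4:3 content on a 16:10 panel appears {factor:.2f}x wider")  # 1.20x
```

In other words, stretched 4:3 content on a 16:10 panel looks about 20 percent fatter than it should.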
Native resolution
Every LCD sports a fixed number of pixels arrayed in a grid that is a certain number of
pixels high and a certain number of pixels wide. The native resolution is the width of the
display (in pixels) by the height (in pixels). The native resolution will deliver an optimum
picture. While it’s possible to run an LCD at a lower, non-native resolution, the image
will be rescaled and the display will use interpolation to fill in the missing pixels, which
can degrade image quality. Native res and interpolation quality are of particular concern
to gamers, who often run games at low resolutions to get the best frame rate. An LCD’s
native resolution is typically determined by its screen size. For example, many 19-inch
monitors have a native resolution of 1280x1024, while many 20-inch models have a
native resolution of 1600x1200. A higher resolution makes everything look smaller
onscreen, but also gives you more desktop space.
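To see why a non-native resolution forces interpolation, look at the per-axis scale factors involved. A minimal sketch, using the 19-inch panel’s 1280x1024 grid mentioned above as the example:

```python
# A game pixel maps cleanly onto the panel only when the scale factor
# is a whole number; anything fractional must be interpolated.

def scale_factors(game_res, native_res):
    """Per-axis scale factors from game resolution to panel grid."""
    return (native_res[0] / game_res[0], native_res[1] / game_res[1])

sx, sy = scale_factors((1024, 768), (1280, 1024))
print(f"Each game pixel covers {sx:.3f} x {sy:.3f} panel pixels")
# Fractional factors (1.250 x 1.333) mean every game pixel gets
# smeared across neighboring panel pixels -- hence the soft image.
```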
Connections
Today’s LCDs connect to the graphics board via either an analog VGA connector or a
digital DVI connector. If your graphics board is equipped with DVI outputs—most
modern boards are - we recommend you use DVI to connect to your LCD. Unlike CRTs,
which must refresh every pixel on the screen 60-plus times a second, LCDs modify
pixels only when they change. The analog connection is less precise because the digital
information must be converted to an analog stream in order to travel to the LCD, where
it is then analyzed and converted back to a digital format. This is a recipe for data loss
or corruption in the image that is ultimately displayed on-screen.
Pixel response time
This spec has been getting a lot of play lately, so it deserves mention. A pixel’s
response time, measured in milliseconds, describes the time it takes for a pixel to
change from its on state to its off state and then back on again. If the response time is
too slow, you’ll see ghosting and other artifacts because the display’s pixels can’t keep
pace with the information sent from the graphics card. This problem is particularly
noticeable in games, which tend to have fast action sequences. A response time of 25
milliseconds was once the norm, but it’s not uncommon these days to see response
times listed in the single digits. As impressive as this spec sounds, it should be taken
with a grain of salt. Different manufacturers report response times differently, so this
spec isn’t a reliable means of comparing different brands. Some vendors report only the
pixels’ rising (turning on) or falling (turning off) time; others report how long it takes for a
pixel to turn on, turn off, and then turn on again; and still others report the time it takes
for a pixel to go from peak white to full black. (Pixels change from white to black much
faster than they change from gray to gray, but the latter is a more common occurrence
in real-world use.) Because of this inconsistency, we don’t normally report on a display’s
pixel response time, but we mention it here to illustrate a point: Response-time specs
often do not jibe with qualitatively measured performance. The best way to determine
an LCD’s abilities with fast-paced content, in our opinion, is to eyeball it first-hand.
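One way to build intuition for these numbers is to compare how many complete pixel transitions a panel can manage per second against a game’s frame rate. A rough, idealized sketch (real panels vary by transition, as noted above):

```python
# 1000ms in a second / response time in ms = maximum complete
# pixel transitions per second (an idealized upper bound).

def max_transitions_per_sec(response_ms):
    return 1000 / response_ms

for ms in (25, 8, 2):
    print(f"{ms}ms panel: {max_transitions_per_sec(ms):.0f} transitions/s")
# A 25ms panel manages only 40 transitions/s -- short of a 60fps
# game's demands -- while single-digit panels keep up comfortably.
```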
Ergonomics
Obviously, the more ability you have to adjust your screen’s height, tilt, and orientation
to fit your body, the better.
Cushy and responsive keys are the hallmark of this latest gaming keyboard from
Microsoft. Designed in conjunction with the wizards at Razer, this pad has blue
backlit keys and a pair of USB ports. A set of programmable keys lines the top of the
keyboard, bracketed by two 360-degree jog dials to adjust volume and whatever else
you assign to them.
G15 GAMING KEYBOARD
The Logitech G15 Gaming Keyboard is designed for gamers of all types. FPS players
will benefit from the built-in folding LCD display that shows vital stats, and MMO junkies
have a near limitless number of macro keys to program for their every need.
Saitek’s Eclipse II is a simple, comfortable keyboard with backlit keys and a heavy
stay-put base that gives you extra insurance against mishaps. The backlit keys can also
rotate between one of three colors with a hit of a button. We’re just a little bummed out
that the board doesn’t have any extra USB ports.