3 Questions About Video Card Specs You’ve Always Wanted Answered

Last week, I wrote a piece about obscure GPU specifications. While it cleared up some of the most confusing parts of video card spec sheets, a number of readers said it could have been more detailed. So it seems only fitting to answer, in this piece, some questions about graphics cards that the original article left open. Without further ado, here are answers to some of the most pressing questions about video card specifications that don't seem to be fully explained anywhere on the Web in terms everyone can understand.

What is CUDA, and what is a CUDA core?


Compute Unified Device Architecture (CUDA) is Nvidia's parallel computing platform, supported by most newer Nvidia graphics cards, which lets software use part of the GPU (or even the whole GPU) as an "assistant" to the processor. GPUs pack much more raw arithmetic muscle than CPUs, but their architecture has historically been optimized for graphics work like transforming vertices and rasterizing polygons (which is why they're put on graphics cards in the first place). CUDA turns the GPU into a math geek that can crunch enormous numbers of calculations in parallel, putting all of that muscle to work on things other than simply rendering and displaying graphics on the screen.
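Real CUDA kernels are written in CUDA C/C++ and run on the GPU, but the programming model is easy to sketch in plain Python: you write a function (the "kernel") that does the work for a single index, and the hardware runs thousands of copies of it at once, one per thread. The names below are made up for illustration, and the "launch" here is just a loop:

```python
# A plain-Python sketch of the CUDA programming model (illustration only;
# real CUDA kernels are written in CUDA C/C++ and run on the GPU).

def vector_add_kernel(a, b, out, i):
    """The 'kernel': the work a single GPU thread would do for index i."""
    out[i] = a[i] + b[i]

def launch(kernel, n, *args):
    """Stand-in for a kernel launch: on a GPU, all n of these would run
    in parallel, one per thread; here we just loop over the indices."""
    for i in range(n):
        kernel(*args, i)

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
launch(vector_add_kernel, 4, a, b, out)
print(out)  # [11.0, 22.0, 33.0, 44.0]
```

On a real GPU the loop disappears: each iteration becomes its own thread, which is why having thousands of small cores pays off.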

In the article linked at the beginning, I explained that SETI@Home takes advantage of CUDA by using graphics cards to perform calculations. That's just one example of what CUDA makes possible. GPUs can also transcode video (convert it from one format to another) in hardware. Nvidia's encoder is known as NVENC, a dedicated video engine on the graphics card that can encode video much more quickly than exhausting your CPU. If you're a developer interested in supporting NVENC in your program, you can see Nvidia's resources here.

OK, so now we know what CUDA is. What about CUDA cores?

A CUDA core is one of the many small processing units that make up the GPU. Each core is a little piece of the overall architecture that can be used for traditional 3D rendering, for CUDA-specific computation, or for both. In most graphics cards, the entire GPU is available for CUDA work, so the advertised number of CUDA cores is effectively the number of cores the whole GPU has.

Why do GPUs have so many cores?

While today’s CPUs typically have four to eight cores, there are graphics cards out there with over 5,000! Why is that, and why can’t CPUs have such an enormous number of cores?

The CPU and GPU were made for different purposes. While the CPU is a general-purpose processor that executes whatever machine code your operating system and applications throw at it, the GPU is built for one specific job: rendering polygons into the beautiful scenes we see in 3D-accelerated environments, then turning all of that into a finished image 60 or more times per second (roughly 16 milliseconds per frame). That’s a tall order for a CPU, but because the GPU is made up of many small, specialized processors, it can split the workload among all of its cores and render a graphical environment within a few milliseconds.

That’s where the cores come in. A GPU needs all of those cores to split massive tasks into tiny pieces, with each core processing its own part of the scene independently. Applications that run on the CPU (like your browser) don’t benefit from such an enormous number of cores, because their work is mostly sequential: loading a webpage or reading a PDF is largely one stream of processing, which favors a few fast cores over thousands of slower ones.
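To make the split-and-combine idea concrete, here's a minimal Python sketch. (Python threads don't actually run math in parallel, so this only models how a big job gets partitioned among many workers, not the speedup a GPU would deliver.)

```python
# Sketch of the divide-and-conquer idea behind GPU parallelism:
# split one big job into independent chunks, process each chunk
# separately, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

data = list(range(1000))

def chunk_sum(chunk):
    """The work one 'core' does on its own slice of the data."""
    return sum(chunk)

# Split the data into 8 equal chunks, one per worker.
n_workers = 8
size = len(data) // n_workers
chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partial_sums = list(pool.map(chunk_sum, chunks))

total = sum(partial_sums)
print(total == sum(data))  # True: the pieces add back up to the whole
```

A GPU does the same thing, except with thousands of workers and dedicated silicon for each one.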

Does more RAM make a video card better?


RAM is a bit of a gray area with video cards. While it’s nice to have as much RAM as possible, you also need to be able to actually use all of it. A video card with 1024 MB of RAM on a 192-bit bus will often perform just as well as a card with 2048 MB of RAM on that same bus, because the extra memory can’t be fed data quickly enough to make a difference.

As I explained in the previous piece, the 2048 MB card runs into what’s called a “bandwidth bottleneck”: the bus (the road the data travels on) isn’t wide enough to carry a sufficient amount of data in a short amount of time.
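It's easy to put numbers on that: peak memory bandwidth is roughly the bus width in bytes multiplied by the effective memory clock. The clock figure below is a made-up round number, purely for illustration:

```python
def bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Approximate peak memory bandwidth in GB/s:
    (bus width in bytes) * (transfers per second)."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

# Same (hypothetical) 6000 MHz effective memory clock, different buses:
narrow = bandwidth_gbs(192, 6000)  # 144.0 GB/s
wide = bandwidth_gbs(256, 6000)    # 192.0 GB/s
print(narrow, wide)
```

Whatever the exact clock, the ratio is what matters: widening the bus from 192 to 256 bits raises bandwidth by a third, and no amount of extra RAM changes that.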

In short, no: more RAM isn’t necessarily better if the video card doesn’t have a wide enough bus. Here’s my rule of thumb for bus width: the bus should be at least one bit wide for every 8 MB of RAM. For example, a 1024 MB card should have at least a 128-bit bus (1024 / 8 = 128), and for a 2048 MB card I’d recommend a minimum of 256 bits.
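The rule of thumb is trivial to express in code; `min_bus_bits` is a made-up helper name for illustration:

```python
def min_bus_bits(ram_mb):
    """The article's rule of thumb: at least 1 bit of bus width
    for every 8 MB of video RAM."""
    return ram_mb // 8

print(min_bus_bits(1024))  # 128
print(min_bus_bits(2048))  # 256
```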

If you still have more questions, be sure to ask them in the comments below!



  1. Well done. If you’re pondering whether to continue this as a series, I believe you’d have an audience.

  2. so i guess my 4 gig geforce vid card isnt that good then huh?

    1. It depends on the bit width of the memory bus. If you’re on 4 gigs but relying on a 192 bit bus, all of that memory isn’t going to be properly utilized in fast-paced gaming situations.

  3. How can a video card be used to process photos such as Photoshop or Photoshop Elements?

    1. The proper question to ask would be “Is it really necessary to process photos through a GPU?” Photos are not math- or polygon-intensive objects. They take up a maximum of 50 MB of computer space and therefore can be processed just fine with a CPU and any amount of RAM. It’s just not feasible to create an entire photo rendering engine that uses a new set of hardware. I don’t think such a thing will ever be explored.

      1. Where are you getting this info?

        Adobe has been adding more and more features that take advantage of GPUs for several versions of its Creative Suite now. Some are for general use (zooming, panning, canvas rotation) while others are for computationally heavy filters (realistic blurs, warping tools, noise reduction).

        They packaged their engine efforts as “Mercury” in CS6 but they were already doing GPU accelerated stuff before that, and continue to do so in CC.

        “Photo processing” means manipulating large amounts of pixel data. It’s inherently math intensive and is precisely what GPUs are supposed to be good at.

        If you want to see how demanding and slow processing photos can be, just ask anyone who uses Lightroom to professionally process collections of raw files from any modern DSLR.

        1. A good CPU will be able to process most photos sufficiently. Although, admittedly, the immense 40 MP photos some would have to process can be a bit taxing on both memory and CPU resources. Processing 40 minutes of 4K video, however, is much more of an intensive process.

          While Adobe’s GPU acceleration works for rendering in CS6 and even CS5, it doesn’t give you the option (natively) to encode your final video using NVENC while exporting your project. I had to download a very fidgety plugin for that, and it works around the entire Adobe system, which isn’t entirely efficient but still beats using a CPU-only export method.

  4. “CUDA can also be used to transcode video (convert it from one format to another) using a special codec that communicates with the hardware. Nvidia’s encoder is known as NVENC, and it’s a powerful way to encode video much more quickly using the graphics card video engine as opposed to exhausting your CPU.”

    So a CUDA video card should speed up converting videos in Handbrake or another similar app? Does the software have to be videocard (CUDA) aware? Or is that handled by the card’s codec? Does the card’s codec replace the software codec, i.e. Xvid, DivX, h264, etc.?

    1. The card doesn’t have the codec. Yes, it’s natively capable of H.264 transcoding, but the software needs to be able to work with CUDA in order for this to happen. I normally just use a plugin for Premiere Pro CS6.

      1. Thanks, I appreciate the reply. My interest in getting a CUDA or similar video card would be for video encoding. What I am doing is encoding DVDs and Blu-rays to MKVs (using discs I own) with H264 and Handbrake. Recently H265 has come on the scene. My system as currently configured is fairly speedy with DVDs, but takes much longer with Blu-rays. Your information concerning offloading this to a GPU is probably the best solution for me, so the more info I can get about this the better. I had been considering more CPU, and thankfully I ran across this article before going that route. It would seem that I need a primer concerning this, so if you have more information, could you post some links? Thanks very much. BTW, what is Premiere Pro CS6?

        1. Premiere Pro CS6 is a professional video editing and encoding software from Adobe Systems. The plugin I’m referring to is very difficult to install.

          Seems like Handbrake can do what you’re asking for already: https://trac.handbrake.fr/wiki/HardwareAcceleration#no1

          They call it “hardware-accelerated encoding”.

  5. Miguel — Thank you, so much, for listening to your readers. This was an excellent article!!! As I commented, on your first article, there are many like me, who have a very, very basic knowledge of what a GPU or a CUDA are and do not realize, that just because your “new” video card has 2GBs of RAM, but, only has a 128-bit bus, it will not be any faster than the 1GB of RAM, with a 128-bit bus.

    I am getting a much better knowledge, of what a video card can do and how to look for the best video card … For my needs! Miquel … Again, thank you for this article and the ones coming in the future.
