Why Some Games Need 64-Bit Support

In the world of gaming, major publishers have recently moved toward releasing their games with 64-bit support. World of Warcraft, despite running for years as a 32-bit application, was patched to add 64-bit support. Many people see this as the direction gaming is heading. But why exactly is 64-bit support so important for certain games? When EA released The Sims 4 as a 32-bit-only application, there was some debate over whether the game should have offered a 64-bit version, at least for machines that could run it. Why does this matter?

To understand why people want 64-bit support in games and applications, we first have to understand what “64-bit” means. Your CPU has registers of fixed sizes (8-bit, 16-bit, 32-bit, and 64-bit), and the largest register determines the largest number the CPU can handle directly in a single operation. On 32-bit processors, that limit is 2,147,483,647 or 4,294,967,295, depending on whether you’re using signed integers (which allow for negative values) or their unsigned (positive values only) equivalents.
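Those two limits fall straight out of the arithmetic: a 32-bit register holds 2^32 distinct patterns, split either across negative and positive values (signed) or all positive (unsigned). A quick sketch:

```python
# The 32-bit limits mentioned above, computed from first principles.
INT32_MAX = 2**31 - 1   # signed: one bit reserved for the sign
UINT32_MAX = 2**32 - 1  # unsigned: all 32 bits hold the magnitude

print(INT32_MAX)   # 2147483647
print(UINT32_MAX)  # 4294967295

# Wraparound: adding 1 to the unsigned maximum overflows back to 0
# once the result is masked to 32 bits, as it would be in a register.
print((UINT32_MAX + 1) & 0xFFFFFFFF)  # 0
```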

64-bit processors can handle much, much larger numbers directly (the maximum value being 18,446,744,073,709,551,615 for unsigned integers).

The basic idea to take away from this is that 64-bit CPUs can work directly with much larger numbers, both when performing rapid calculations and when addressing memory. A 32-bit address can refer to at most 2^32 distinct bytes, which is why 32-bit processors (and 32-bit programs) can only address up to 4 GB of memory. Speaking of memory…
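The 4 GB figure is just that address-space arithmetic worked out:

```python
# A 32-bit pointer can name 2**32 distinct byte addresses.
addressable_bytes = 2**32

# Convert to gibibytes (2**30 bytes each).
print(addressable_bytes / 2**30)  # 4.0
```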


Whenever a game starts pushing the memory limits of the system running it, developers have to work within those constraints, which can rule out caching things like characters and objects. Take The Sims 4, for instance. When the game state changes, it must reload all of the characters, just as it does when you first load the game. It doesn’t cache them (which would make loading times much faster) because a 32-bit process is limited to 4 GB of address space. Even if it’s running on a 64-bit CPU, the game itself is compiled as a 32-bit application and can’t take advantage of the larger address space.

In short, games with 64-bit support can cache much bigger chunks of their data. This means faster loading times and, possibly, features like autosave in games that are traditionally memory-hungry.
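To make the trade-off concrete, here is a purely hypothetical sketch (the class and names are invented, not how The Sims 4 actually works): a loader that keeps decoded characters in a cache only while they fit under a memory budget. With a tight budget, every load falls through to a full reload; with a roomy one, repeat loads are nearly free.

```python
class CharacterLoader:
    """Hypothetical asset loader with a fixed memory budget for caching."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.cache = {}
        self.reloads = 0  # counts slow, from-disk loads

    def load(self, name, size_bytes):
        if name in self.cache:
            return self.cache[name]      # fast path: already decoded
        self.reloads += 1                # slow path: full reload
        data = f"decoded:{name}"
        if self.used + size_bytes <= self.budget:
            self.cache[name] = data      # keep it only if it fits the budget
            self.used += size_bytes
        return data

# Roomy budget: the second load is a cache hit.
roomy = CharacterLoader(budget_bytes=100)
roomy.load("sim_a", 60)
roomy.load("sim_a", 60)
print(roomy.reloads)  # 1

# Tight budget: the character never fits, so every load is a reload.
tight = CharacterLoader(budget_bytes=10)
tight.load("sim_a", 60)
tight.load("sim_a", 60)
print(tight.reloads)  # 2
```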

In 32-bit code, high-precision decimal arithmetic can also be cumbersome. In most cases, the roughly seven significant decimal digits of a single-precision floating-point number are sufficient. But what if you’re trying to store a much more precise value? Games are evolving, and some calculations (like the rate at which a character’s energy decays) need more than seven digits of precision. On 64-bit architectures, double-precision floating-point numbers, which give you roughly 15 to 16 significant decimal digits, can be moved and manipulated natively in single operations.

Yes, you could use double-precision values on 32-bit processors too, but pushing one through the 32-bit integer registers required a workaround: the 64-bit value had to be handled as two 32-bit halves stuck together with duct tape. That forced the processor through extra instructions just to assemble the halves into a proper value. A number like 4.2592039521510 would effectively be shuttled around as two separate 32-bit pieces instead of one single value.
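You can see the precision gap directly by round-tripping that same number through a 32-bit float versus a 64-bit double (Python’s own floats are already doubles, so only the 32-bit trip loses anything):

```python
import struct

value = 4.2592039521510

# Pack the value into 4 bytes ('f' = 32-bit float) and unpack it again;
# digits past roughly the seventh are rounded away.
as_float32 = struct.unpack('f', struct.pack('f', value))[0]

# The same round trip through 8 bytes ('d' = 64-bit double) is lossless.
as_float64 = struct.unpack('d', struct.pack('d', value))[0]

print(as_float32)           # no longer equal to the original
print(as_float64 == value)  # True
```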


Despite the whole 32-bit vs. 64-bit conundrum in games, there’s one thing I think you should know: this doesn’t have anything to do with graphics. You see, graphics cards have evolved to use bit widths much larger than your CPU’s (many of them have memory buses up to 256 bits wide!). What 64-bit CPU support gives your games is room for better decision-making engines that use your memory more efficiently. The graphics will look the same, but the game can be smarter and more responsive.

If you feel like there’s something to add to this discussion, please leave a comment below!