The End of Moore’s Law: Preparing for the Future

Modern life undeniably depends on computers, and computers depend on silicon-based processor chips. Computers have kept improving over time thanks to steadily increasing processing power.

Moore’s law is the observation that computer chips get faster, more energy efficient, and cheaper to produce at a predictable rate: roughly every eighteen months, the number of transistors that can be placed on a silicon chip doubles. Each new generation of chip, however, delivers a smaller performance boost than the one before.
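To get a feel for how quickly that doubling compounds, here is a rough back-of-the-envelope sketch. The 18-month period is the figure quoted above, and the starting count of about 2,300 transistors (the Intel 4004, 1971) is used purely for illustration:

```python
# Rough illustration of Moore's-law-style doubling (18-month period assumed).
def transistors_after(years, start=2300, doubling_months=18):
    """Project a transistor count after `years`, doubling every `doubling_months`."""
    doublings = (years * 12) / doubling_months
    return start * 2 ** doublings

# Thirty years of doubling every 18 months is 20 doublings: about a
# million-fold increase over the starting count.
print(round(transistors_after(30)))  # → 2411724800
```

The point of the sketch is the exponent: even a modest-sounding doubling period turns thousands of transistors into billions within a few decades, which is why the end of that trend matters so much.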


Moore’s Law is not a law like Newton’s Three Laws of Motion. Instead, it is an observation of what was happening in the chip-making industry.

Moore’s law will end. At some point we will no longer be able to fit more transistors onto a single silicon chip; silicon seems to be approaching its peak in both performance and efficiency. Yet new computers and technologies will demand ever more powerful and agile processors.

Some believe Moore’s Law-style improvements in speed can continue until at least 2025. Even so, there is a risk that Moore’s law will come to an end before a viable replacement is ready, so we need to explore alternatives for silicon-based computing today.

Quantum Computing

Quantum computing harnesses the physics of subatomic particles. It promises currently unimaginable processing power and speed, delivered by quantum bits, or “qubits,” which, unlike classical bits, can exist in a superposition of 0 and 1.
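One common intuition for that power: a register of n qubits is described by 2**n complex amplitudes, so the state space grows exponentially with the number of qubits. A minimal sketch of that growth (the specific qubit counts are just illustrative):

```python
# A register of n qubits is described by 2**n complex amplitudes; this
# exponential growth is one intuition for quantum computing's potential power.
def amplitude_count(n_qubits):
    return 2 ** n_qubits

for n in (1, 10, 50):
    print(f"{n} qubits -> {amplitude_count(n):,} amplitudes")
# Even 50 qubits correspond to over a quadrillion amplitudes, more than a
# classical machine can comfortably simulate directly.
```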


The main problem with quantum computing right now is that researchers have yet to demonstrate a quantum machine completing a practical task faster than conventional silicon-based technology. That milestone has remained just out of reach.

Graphene and Carbon Nanotubes

Graphene is a single layer of carbon atoms that is believed to be the strongest material on earth. It is 200 times stronger than steel, yet elastic enough to be stretched another 20% to 25% of its original length. It is exceptionally lightweight and conducts heat and electricity better than other known materials. Because graphene is pure carbon, the raw material is extremely plentiful, but it may be years before it is ready for commercial production.


Graphene’s biggest limitation is that it cannot be used as a switch. Silicon semiconductors have a bandgap, so transistors built from them can be turned on and off with an electrical current; graphene has no natural bandgap, so a graphene transistor cannot be fully switched off. Using graphene alone would result in a computer that cannot be turned off.

If graphene can replace silicon chips, we could see technology like foldable laptops, lightning-fast transistors, and cell phones that will not break.

Nanomagnetic logic

Nanomagnetic logic (NML) depends on arrays of nanomagnets ranging in size from a few nanometers to a few hundred nanometers. Nanomagnets play the role transistors play in silicon, but instead of switching electrical current, the process relies on switching magnetization to represent binary code. NML uses dipole-to-dipole interactions (the interaction between the north and south poles of neighboring magnets) to transmit data, and because switching does not require a flow of electrical current, it needs only a small amount of power to run.

Cold computing

While cold computing is not necessarily a brand-new technology, manufacturers see it as a way to extend the life of Moore’s Law. Reducing the temperature of a chip reduces current leakage, and the colder temperature lowers the threshold voltage at which transistors switch. Cold computing may buy us an additional four to ten years of scaling in memory performance and power.

Compound semiconductors

Semiconductors created from two or more elements are faster and more efficient than silicon alone. These compound semiconductors are already available and will soon be finding their way into 5G and 6G phones, giving them more speed, a smaller size, and better battery life.



Single-atom storage

Technology has evolved to the point where we can manipulate materials down to the atomic level, and chip technology is no exception. IBM has devised a possible way to store data on a single atom; today it takes 100,000 atoms to store a single 1 or 0.

Atoms are by nature unstable, so for this to be a viable option, more logic for things like error correction will be needed.

Which replacements are most likely?

Compound semiconductors are the only alternative to silicon-based processors that is viable today. Beyond that, the technology that seems most promising at the moment is nanomagnetic logic. It’s also possible that computers of the future will contain layers of several of these technologies, each counteracting the disadvantages of the others. But at the moment, no one can accurately predict what the computers of the future will look like.

Tracey Rosenberger

Tracey Rosenberger spent 26 years teaching elementary students, using technology to enhance learning. Now she's excited to share helpful technology with teachers and everyone else who sees tech as intimidating.


  1. > so we need to explore alternatives for silicon-based computing today.

    Informative article, but I couldn’t help feeling that programs themselves haven’t advanced much. I’m not a gamer, for example, but from someone on the outside looking in, so many games seem like “modern” versions of Doom (which ran well on a 386DX at 40 MHz).

    It seems that, aside from heavy graphics, so much of our computing demands are extraneous: support for scores of connections to ad networks, unnecessary syncing/telemetry, JavaScript mining, auto-playing videos, and so much more… Most web pages, for example, comprise a kilobyte of actual content and (often) megabytes of “who-knows-what-anymore”…

    That’s just web pages. When we start talking about local applications (Electron, built-in ad networks, etc.), we see even more unnecessary code bloat. Move on to most commercial OSes (face ID, voice recognition, ad frameworks, spyware): the same.

    We need a new (observational) law for software bloat.

    A lightweight OS, lightweight apps, an ethical operating system: these go a long way in buying the quantum computing breakthrough more time to develop, so we can be further surveilled and tracked (that’s where our computing power is going in the consumer market).

    1. What you wish for is not going to happen, for two basic reasons. One, companies and sites want to make as much money as possible, so they bombard us with ads. Every ad displayed makes them some money; every ad clicked on makes them more. There are sites now that will not work unless you disable your ad-blocker. There is also technology that lets sites bypass ad-blockers and display ads anyway.

      The second reason is the fault of the users. They keep insisting on more “user-friendly” programs and apps, more features, and prettier interfaces. All that can only be achieved with more and more lines of code, which increases the size of applications. It is a paradox: users want more features, pretty GUIs, and “user-friendly” apps, but then complain bitterly about the “bloat”.

      Hardware vs. software is a vicious circle. As soon as a more powerful CPU is developed, software developers upgrade their apps to take full advantage of the new chip’s capabilities. They add so many features and so much glitz that the app brings the CPU to its knees. Which forces a new and more powerful CPU to be developed, which in turn allows for more features and glitz in apps. Rinse and repeat, ad nauseam.

  2. I wonder what the sites that won’t run unless you disable your ad-blocker have to say? I never see them because, sorry, my ad blocker stays on under all circumstances (and I still ignore plenty of ads that do get through). Time for a new paradigm where companies pay me directly to look at their ads.

    1. “I wonder what sites that won’t run unless you disable your ad-blocker have to say? ”
      They say that ad blockers are destroying the Internet. They also say that if you use an ad blocker, you are taking food out of their children’s mouths.

      However, never fear, sites are starting to use anti ad blocker software that allows ads to be displayed anyway.

Comments are closed.