Fundamental energy cost of finite-time parallelizable computing

Michael Konopik et al. Nat Commun. 2023 Jan 27;14(1):447. doi: 10.1038/s41467-023-36020-2.

Abstract

The fundamental energy cost of irreversible computing is given by the Landauer bound of kT ln 2/bit, where k is the Boltzmann constant and T is the temperature in Kelvin. However, this limit is only achievable for infinite-time processes. We here determine the fundamental energy cost of finite-time parallelizable computing within the framework of nonequilibrium thermodynamics. We apply these results to quantify the energetic advantage of parallel computing over serial computing. We find that the energy cost per operation of a parallel computer can be kept close to the Landauer limit even for large problem sizes, whereas that of a serial computer fundamentally diverges. We analyze, in particular, the effects of different degrees of parallelization and amounts of overhead, as well as the influence of non-ideal electronic hardware. We further discuss their implications in the context of current technology. Our findings provide a physical basis for the design of energy-efficient computers.
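The scaling behavior stated in the abstract can be made concrete with a small numerical sketch. The snippet below assumes a finite-time cost per irreversible operation of the form W(τ) ≈ kT ln 2 + a/τ, the standard finite-time extension of the Landauer bound that underlies Eqs. (3) and (4) referenced in the figure captions; the constant a, the fixed parallel operation time τ_p, and the parameter values are illustrative assumptions loosely modeled on Fig. 2, not the paper's exact expressions.

import math

k_B = 1.380649e-23                  # Boltzmann constant [J/K]

def energy_per_op(tau, T_kelvin, a):
    # Assumed finite-time cost per irreversible operation:
    # Landauer term kT ln 2 plus a speed penalty a/tau (a in J*s).
    return k_B * T_kelvin * math.log(2) + a / tau

T_kelvin = 1.0                      # temperature [K], as in Fig. 2
T_total  = 1.0                      # total available time [s]
a        = 8e-7 * k_B * T_kelvin    # hardware constant [J*s], value quoted in Fig. 2
tau_p    = T_total                  # parallel: time per op stays fixed (assumed b = 1 reading)

for N in (1e3, 1e6, 1e9):
    tau_s = T_total / N             # serial: time per op must shrink as the problem grows
    w_ser = energy_per_op(tau_s, T_kelvin, a)
    w_par = energy_per_op(tau_p, T_kelvin, a)
    print(f"N = {N:.0e}: serial {w_ser:.2e} J/op, parallel {w_par:.2e} J/op")

Under these assumptions the parallel cost per operation stays essentially at kT ln 2 for all N, while the serial cost grows linearly with N and is already about three orders of magnitude above the Landauer value at N = 10^9, in line with the divergence described in Fig. 2 below.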


Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. Assumptions for ideal serial and parallel computers.
(a) Schematic illustration of the three main assumptions: (i) the total time T available to solve a given problem requiring N irreversible computing operations is limited; (ii) an ideal serial computer (left, blue) reduces the time per operation τ_s with increasing problem size N; (iii) an ideal parallel computer (right, red) increases the number of processors n in proportion to the problem size N in order to keep the time per operation τ_p constant. (b) Optimal 1/τ behavior of the energy consumption of a single algorithmic operation of duration τ, and the effects that the serial (blue) and parallel (red) computing strategies have on the energetic cost of computation.
Fig. 2. Finite-time Landauer bound for ideal serial and parallel computers.
(a) Energy consumption per operation, W/N, for solving a fully parallelizable problem of size N by an ideal serial, Eq. (3) (blue), and parallel, Eq. (4) (orange), computer. The energetic cost diverges with N for an ideal serial computer and remains constant for an ideal parallel computer. (b) Maximal number of bit operations N_max^ser (N_max^par) that can be performed by an ideal serial, Eq. (5) (blue), and parallel, Eq. (6) (orange), computer in the finite time T = 1 s within a given power budget P_max. Parameters are T = 1 K, b = 1 and a = 8 × 10^−7 kT s.
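Eqs. (5) and (6) are not reproduced on this page, but the power-budget comparison in panel (b) can be sketched under the same assumed cost model W(τ) ≈ kT ln 2 + a/τ: requiring the total energy N·W(τ) to stay below P_max times the available time (1 s) yields a quadratic condition in N for the serial machine and a linear one for the parallel machine. The expressions below are a hedged reconstruction for illustration, not the paper's equations.

import math

k_B = 1.380649e-23                    # Boltzmann constant [J/K]
T_kelvin, b, T_total = 1.0, 1.0, 1.0  # Fig. 2 parameters: T = 1 K, b = 1, total time 1 s
a = 8e-7 * k_B * T_kelvin             # hardware constant [J*s], value quoted in Fig. 2
L = k_B * T_kelvin * math.log(2)      # Landauer energy per bit operation [J]

def n_max_serial(p_max):
    # Largest N satisfying N * (L + a * N / T_total) <= p_max * T_total (quadratic in N).
    return T_total * (-L + math.sqrt(L**2 + 4.0 * a * p_max)) / (2.0 * a)

def n_max_parallel(p_max):
    # Largest N satisfying N * (L + a / (b * T_total)) <= p_max * T_total (linear in N).
    return p_max * T_total / (L + a / (b * T_total))

for p_max in (1e-3, 1.0, 1e3):        # power budgets [W]
    print(f"P_max = {p_max:g} W: serial ~{n_max_serial(p_max):.1e} ops, "
          f"parallel ~{n_max_parallel(p_max):.1e} ops")

Under this reconstruction the serial operation count grows only as the square root of the power budget (about 3 × 10^14 operations at 1 W with these parameters), whereas the parallel count grows linearly and approaches the Landauer-limited value P_max × (1 s)/(kT ln 2) ≈ 10^23 operations, consistent with the separation shown in panel (b).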
Fig. 3. Fundamental limit and extrapolated energy cost per operation for ideal serial and parallel computers.
Fundamental limits obtained for a = h (Planck constant; T = 1 K) (solid lines) and extrapolated energy cost corresponding to a = 2.3 kT s (Xeon Phi; T = 330 K) (dashed lines) for ideal serial, Eq. (3) (blue), and parallel, Eq. (4) (orange), computers. The measured value for a Xeon Phi processor is represented by a black X. For reference, an energy cost of 1 J/operation is shown as a dash-dotted line. Parameters are T = 1 s and b = 1.
Fig. 4. Effects of not ideally parallelizable problems.
(a) Energy cost per operation W_tot^com/N for a partially parallelizable algorithm that has no overhead, Eq. (7). (b) Energy cost per operation W_tot^ove/N for a fully parallel algorithm with linear overhead N_ove(n) = cn, Eq. (9), and c = 0–5000%. Same parameters as in Fig. 2.
