China's supercomputer breakthrough uses 37 million processor cores to model complex quantum chemistry at molecular scale — Sunway fuses AI and quantum science
Scientific simulations are normally performed on supercomputers because they require tremendous compute throughput. Some problems, such as simulating the quantum behavior of molecules, whose number of interacting states grows exponentially, either require quantum computers or must be simplified to fit modern supercomputers. However, Chinese scientists have successfully used an AI model and the existing Sunway Oceanlite supercomputer to model complex quantum chemistry at the scale of real molecules, a scientific and technological breakthrough, reports VastData.
A quantum state in quantum mechanics, described by a wavefunction (Ψ), encodes all possible configurations of a quantum system, such as the positions, spins, or energy levels of particles like electrons in a molecule, along with their probabilities. Modeling it is challenging because the state space grows exponentially with the number of particles, making exact simulation infeasible on the classical supercomputers we use today. To that end, scientists use a variety of approximation methods that simplify the quantum equations while preserving enough accuracy to describe molecular structures, reactions, and energies. However, existing methods that approximate the wavefunction scale only to small molecules.
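The exponential growth described above is easy to quantify. The short sketch below (illustrative only, not the researchers' code) assumes the common simplification that each spin orbital is either occupied or empty, giving 2^n basis configurations for n spin orbitals:

```python
# Illustrative sketch: why exact quantum simulation is intractable on
# classical machines. Assumes each spin orbital is either occupied or
# empty, so the many-body basis has 2^n configurations.

def num_configurations(spin_orbitals: int) -> int:
    return 2 ** spin_orbitals

print(num_configurations(10))   # 1024: easily enumerable
print(num_configurations(120))  # roughly 1.3e36: far beyond any classical machine
```

At 120 spin orbitals, the scale of the simulation reported here, even storing one byte per configuration would require unimaginably more memory than exists on Earth, which is why approximation is unavoidable.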
To study many-body quantum systems with strong electron correlations (tens of electrons, over 100 spin orbitals, and so on), physicists several years ago proposed using modern machine-learning surrogates, such as neural-network quantum states (NNQS), to approximate all the possible configurations and motions of electrons within a molecule. This method promises to wed AI scalability with quantum accuracy for research that is currently impossible using traditional methods.
To conduct their experiment with 120 spin orbitals, researchers developed their own NNQS framework. Their simulation trained a neural network to approximate the molecule's wavefunction, determining where electrons are most likely to be. For each sampled electron arrangement, the system calculated the local energy and adjusted the network until its predictions matched the molecule's true quantum energy pattern.
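The sample-evaluate-adjust loop described above can be sketched with a toy variational Monte Carlo example. This is not the researchers' NNQS framework: in place of a neural network and a real molecule, it assumes a one-parameter Gaussian trial wavefunction psi(x) = exp(-alpha * x^2) for a 1D harmonic oscillator, whose exact ground-state energy is 0.5 in natural units, but the training logic (sample configurations from |psi|^2, compute each sample's local energy, nudge the parameters to lower the average energy) is the same idea:

```python
import numpy as np

def local_energy(x, alpha):
    # E_L = (H psi) / psi for H = -1/2 d^2/dx^2 + 1/2 x^2 and
    # psi(x) = exp(-alpha * x^2)
    return alpha + x**2 * (0.5 - 2.0 * alpha**2)

def vmc_optimize(alpha=0.3, lr=0.05, steps=300, n_samples=5000, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        # |psi|^2 is a Gaussian here, so we sample it directly; a neural
        # ansatz would need Markov-chain (Metropolis) sampling instead.
        x = rng.normal(0.0, np.sqrt(1.0 / (4.0 * alpha)), n_samples)
        e_loc = local_energy(x, alpha)
        o = -x**2  # d ln(psi) / d alpha, the "score" of the ansatz
        # Stochastic estimate of dE/d(alpha), standard VMC gradient formula
        grad = 2.0 * (np.mean(e_loc * o) - np.mean(e_loc) * np.mean(o))
        alpha -= lr * grad  # adjust the ansatz toward lower energy
    return alpha, float(np.mean(e_loc))

alpha, energy = vmc_optimize()
print(alpha, energy)  # both converge near the exact values of 0.5
```

In the NNQS setting, `alpha` becomes millions of network weights and the configurations are electron occupations of spin orbitals, but the energy-minimization loop has this same shape.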
The proprietary NNQS framework was tailored for China's Oceanlite supercomputer, which is based on the 384-core Sunway SW26010-Pro CPU. The chip supports FP16, FP32, and FP64 data formats and features a unique architecture designed for HPC rather than for AI. In particular, the researchers had to account for how the SW26010-Pro parallelizes workloads and handles data.
The researchers designed a hierarchical communication model in which management cores handled coordination between processors and nodes, while millions of 'lightweight' 2-wide compute processing elements (CPEs), each featuring a 512-bit vector engine, performed local quantum calculations. In addition, they created a dynamic load-balancing algorithm to ensure that uneven computational loads did not leave any cores idle.
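The payoff of dynamic load balancing can be illustrated with a generic scheduling sketch (this is not Sunway's actual algorithm, and the task costs are made up): when per-sample work is uneven, handing tasks to whichever worker frees up first finishes sooner than carving the task list into fixed chunks.

```python
import heapq

def static_makespan(costs, n_workers):
    # Static partitioning: contiguous, equal-count chunks per worker.
    # Finish time is set by the unluckiest worker.
    chunk = (len(costs) + n_workers - 1) // n_workers
    return max(sum(costs[i:i + chunk]) for i in range(0, len(costs), chunk))

def dynamic_makespan(costs, n_workers):
    # Dynamic scheduling: each task goes to the currently least-loaded
    # worker, mimicking cores pulling from a shared queue as they free up.
    loads = [0.0] * n_workers
    heapq.heapify(loads)
    for c in costs:
        heapq.heappush(loads, heapq.heappop(loads) + c)
    return max(loads)

costs = [9, 1, 1, 1, 8, 1, 1, 1]  # hypothetical, deliberately uneven task costs
print(static_makespan(costs, 4))   # 10: one chunk contains an expensive task
print(dynamic_makespan(costs, 4))  # 9: the expensive tasks land on separate workers
```

At Oceanlite's scale, with tens of millions of CPEs, even a few percent of idle time per core wastes an enormous amount of compute, which is why the load balancer matters as much as the raw arithmetic.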
The scientists ran their code on 37 million CPE cores and achieved 92% strong scaling and 98% weak scaling, a high level of efficiency at such a scale, which highlights that the developers found a near-perfect match between software and hardware, a major accomplishment for China's supercomputer community. Also, to date, the simulation of a molecular system with 120 spin orbitals is the largest AI-driven quantum chemistry calculation ever performed on a classical supercomputer, marking a breakthrough for the PRC's AI and quantum industries.
Without any doubt, the achievement demonstrates that NNQS can be used for quantum physics research on modern supercomputers. However, it remains unclear whether using an exascale supercomputer like Oceanlite for AI-based quantum physics research is efficient, in terms of both effort and power.

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.
twin_savage: I'm seeing some software vendors and "experts" push DNN surrogate models to solve traditional FEA problems, and it is a mess; they are very often inaccurate and don't produce an error function like a PCE surrogate would, so they don't even quantify how inaccurate the results they spit out are.
I can't imagine the surrogate model for spin orbitals is any better.
ejolson: After constructing quantitative neural networks at a much smaller scale, my observation is that they are able to create good approximations from less data, but as the quality of the data increases, the quality of the approximations does not get appreciably better. It could just be me.
At any rate, Oceanlite turned out to be a good research platform for people doing interesting work. It also demonstrates just a sampling of what DEC might have created if not destroyed years ago by Compaq for its sales team.
I'm disappointed Sun was destroyed in a similar way, but that's a different story.