The paper presents the first calculation of the three-loop massless QCD and QED helicity amplitudes for light-by-light (LbL) scattering ($\gamma \gamma \to \gamma \gamma$). Using Lorentz tensor decomposition and advanced IBP reduction techniques, the authors obtain remarkably compact analytic results at NNLO precision, in strong agreement with ATLAS data from ultra-peripheral heavy-ion collisions.
TL;DR
Researchers have successfully calculated the three-loop massless QCD and QED helicity amplitudes for light-by-light (LbL) scattering ($\gamma \gamma \to \gamma \gamma$). This represents the NNLO (Next-to-Next-to-Leading Order) frontier for this process. By transforming a mountain of algebraic complexity into a remarkably compact set of 23 transcendental functions, the team provided theoretical predictions that align well with ATLAS experimental data from the LHC, revealing that NNLO corrections are far more significant than previously assumed.
Problem & Motivation: The Nonlinearity of Light
In classical Maxwellian physics, photons do not interact with each other—they simply pass through one another. However, Quantum Electrodynamics (QED) allows for light-by-light scattering via virtual lepton or quark loops.
While this "nonlinear" property of light has been studied theoretically for nearly a century, measuring it required the extreme environment of Ultra-Peripheral Collisions (UPCs) at the LHC, where heavy lead ions pass close enough for their electromagnetic fields to interact without hadronic contact. To match the increasing precision of ATLAS and CMS measurements, theory needed to move beyond NLO. The primary hurdle? The "algebraic explosion" of three-loop integrals and non-planar topologies, whose intermediate expressions can reach hundreds of gigabytes in raw form.
Methodology: Taming the Multi-Loop Beast
The authors broke the complexity using a multi-staged technical pipeline:
- Lorentz Tensor Decomposition: Working in the 't Hooft–Veltman (tHV) scheme, they projected the amplitude onto a minimal basis of tensor structures whose size matches the number of independent helicity configurations, avoiding redundant structures that would otherwise bloat intermediate expressions.
- Abelianisation: A clever trick where pure QCD building blocks (gluon exchanges) are mapped to QED (photon exchanges) by adjusting color factors and charges.
- Advanced Reductions: They used Integration-By-Parts (IBP) identities via `Finred` and `Kira`, combined with finite-field reconstruction: instead of tracking massive symbolic expressions, they evaluated the reductions over prime fields and reconstructed the final analytic form from many numerical samples.
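A minimal sketch of the idea behind finite-field reconstruction (this is my illustration, not the authors' actual `Finred`/`Kira` pipeline): an IBP coefficient is ultimately a rational number, so one can work with its image modulo a large prime and recover the exact fraction at the end with Wang's rational-reconstruction algorithm.

```python
# Toy model of finite-field reconstruction: evaluate a rational coefficient
# in F_p, then recover the exact fraction via Wang's algorithm.
from fractions import Fraction
from math import isqrt

def rational_reconstruct(a: int, p: int) -> Fraction:
    """Recover the unique n/d with |n|, d <= sqrt(p/2) such that
    n * d^{-1} = a (mod p), using the half-extended Euclidean algorithm."""
    bound = isqrt(p // 2)
    r0, t0 = p, 0
    r1, t1 = a % p, 1
    while r1 > bound:                  # stop at the first small remainder
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound:
        raise ValueError("reconstruction failed; sample with a larger prime")
    if t1 < 0:                         # normalise the denominator's sign
        r1, t1 = -r1, -t1
    return Fraction(r1, t1)

p = 2**31 - 1                          # a large prime defining F_p
coeff = Fraction(-7, 24)               # a toy "IBP coefficient"
image = coeff.numerator * pow(coeff.denominator, -1, p) % p  # its image in F_p
print(rational_reconstruct(image, p))  # -> -7/24
```

In the real calculation this trick is applied to enormous rational functions of the kinematic invariants: many evaluations at prime-field sample points replace one intractable symbolic reduction.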
Figure 1: Representative three-loop diagrams showing the complexity of photon-quark-gluon interactions.
Experimental Validation & Results
The final expressions for the finite part of the amplitudes are "remarkably compact": once the $1/\epsilon$ poles of dimensional regularisation are subtracted, what remains is the physically meaningful finite remainder.
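Schematically (in my notation, not the paper's), the renormalised three-loop amplitude is a Laurent series in the dimensional regulator $\epsilon$ (with $d = 4 - 2\epsilon$), and the quantity reported is what is left after the poles are removed:

$$\mathcal{A}^{(3)}(\epsilon) \;=\; \sum_{k \ge 1} \frac{c_k}{\epsilon^{k}} \;+\; \mathcal{F}^{(3)} \;+\; \mathcal{O}(\epsilon),$$

where the pole coefficients $c_k$ are in general fixed by lower-loop data and universal factors, and $\mathcal{F}^{(3)}$ is the compact finite remainder the paper quotes.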
- The K-Factor Surprise: Usually, higher-order corrections shrink. Here, the NNLO corrections turn out to be roughly twice as large as the NLO ones in some kinematic regions (up to a 3.5% enhancement). This stems from a new topology (Fig. 3d in the paper) that first appears at three loops.
- ATLAS Data Agreement: The di-photon invariant mass ($m_{\gamma\gamma}$) and rapidity ($y_{\gamma\gamma}$) distributions computed at NNLO show an excellent fit with current LHC data.
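To make the "roughly twice as large" statement concrete, one can write the cross section as a perturbative series (schematic notation of mine, not the paper's):

$$\sigma_{\text{NNLO}} \;=\; \sigma_{\text{LO}}\,\bigl(1 + \delta_{\text{NLO}} + \delta_{\text{NNLO}}\bigr), \qquad K \;=\; \frac{\sigma_{\text{NNLO}}}{\sigma_{\text{LO}}}.$$

The K-factor surprise then reads $\delta_{\text{NNLO}} \approx 2\,\delta_{\text{NLO}}$ in the affected kinematic regions, with $\delta_{\text{NNLO}}$ reaching about $3.5\%$.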
Figure 2: Differential cross-section relative to the di-photon invariant mass. Note the convergence of NNLO (blue) toward the experimental ATLAS data points.
Critical Insight & Future Outlook
The most striking takeaway is the drop in transcendental weight of the functions involved. While generic three-loop amplitudes require weight-6 polylogarithms, the all-plus helicity configuration simplifies significantly. This hints at a hidden simplicity in the S-matrix for LbL scattering that we are only beginning to understand.
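For readers unfamiliar with transcendental weight: it is an additive grading on the special functions that appear in loop amplitudes, for example

$$w(\log x) = 1, \qquad w\bigl(\operatorname{Li}_n(x)\bigr) = n, \qquad w(\zeta_n) = n, \qquad w(f \cdot g) = w(f) + w(g).$$

A generic $L$-loop amplitude involves functions of weight up to $2L$, so weight 6 at three loops; a remainder that closes at lower weight is therefore a genuine structural simplification, not just a shorter formula.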
Limitations: The current study assumes massless quarks and leptons. At low invariant masses (near the production threshold of tau leptons or bottom quarks), the mass effects could become relevant. The authors identify this "massive-loop" calculation as the next logical step in the quest for sub-percent precision in the Standard Model.
Conclusion
This paper isn't just a record-breaking calculation; it's a demonstration that the tools of modern amplitude theory (finite fields, syzygies, and multiple polylogarithms, or MPLs) are now mature enough to handle the most demanding NNLO phenomenological tasks at the LHC.
