SYDNEY, Australia, Dec. 22, 2025 — Computer scientists Eleon Bach and Sophie Huiberts presented a proof at the Dec. 14-17 IEEE Symposium on Foundations of Computer Science (FOCS) that delivers the strongest smoothed-analysis bound yet for the simplex method. By tightening the link between small input perturbations and the geometry the algorithm traverses, their work also yields a faster simplex method and shows that the new noise dependence is essentially optimal.
The simplex method is the decades-old workhorse for linear programming, the math behind everything from production plans to shipping routes. In geometric terms, it moves from corner to corner along the edges of a shape defined by constraints, hunting the best vertex — and it is famously fast in practice even though worst-case theory has long struggled to explain why.
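The vertex-hunting picture can be made concrete with a toy problem. The sketch below brute-forces the vertices of a tiny two-dimensional feasible region by intersecting constraint boundaries, then picks the best one; the constraints and objective are made up for illustration, and the simplex method would instead walk from vertex to neighboring vertex rather than enumerate them all.

```python
# Toy 2-D linear program: maximize x + 2y subject to
#   x <= 4, y <= 3, x + y <= 5, x >= 0, y >= 0.
# An LP optimum is always attained at a vertex of the feasible
# polytope, which is the fact the simplex method exploits.
from itertools import combinations

# Each constraint a*x + b*y <= c is stored as (a, b, c).
constraints = [
    (1, 0, 4),    # x <= 4
    (0, 1, 3),    # y <= 3
    (1, 1, 5),    # x + y <= 5
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def objective(x, y):
    return x + 2 * y

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Brute-force vertex enumeration: intersect every pair of
# constraint lines and keep the feasible intersection points.
# (The simplex method walks edge-to-edge between such points.)
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel lines, no unique intersection
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

best = max(vertices, key=lambda v: objective(*v))
print(best, objective(*best))  # the optimal corner and its value
```

Enumeration is exponentially expensive in higher dimensions, which is exactly why the simplex method's edge-walking strategy, and the question of how many steps that walk takes, matters.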
One emblem of that mismatch is the 1972 Klee-Minty construction, which shows how certain pivot rules can push the simplex method into an exponential number of pivot steps, effectively forcing it to visit nearly every corner of a polytope before it finds the optimum.
The modern attempt to reconcile theory and practice began with Spielman and Teng’s 2001 smoothed-analysis paper, which asked what happens when every carefully crafted worst-case input is nudged by a little random noise. In that model, the brittle structure behind many pathological instances breaks, and a simplex method can be shown to run in polynomial time.
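The input model behind smoothed analysis is easy to state in code. The sketch below adds independent Gaussian noise of standard deviation σ to every entry of a constraint matrix; the matrix itself is an arbitrary stand-in, not an actual worst-case instance, and the point is only that every entry shifts by roughly σ, which is enough to destroy the knife-edge degeneracies behind exponential examples.

```python
# Smoothed-analysis input model: take an adversarial constraint
# matrix and perturb every entry with independent Gaussian noise
# of standard deviation sigma. Smoothed analysis then measures
# the expected pivot count over this randomness.
import random

random.seed(0)  # fixed seed so the illustration is reproducible

sigma = 0.01  # noise scale; smaller sigma = closer to worst case

A = [          # placeholder matrix, not a real worst-case LP
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
]

A_perturbed = [
    [a_ij + random.gauss(0.0, sigma) for a_ij in row]
    for row in A
]

# Each entry moves by O(sigma) with high probability.
max_shift = max(
    abs(p - a)
    for row, prow in zip(A, A_perturbed)
    for a, p in zip(row, prow)
)
print(max_shift)
```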
What the proof changes for the simplex method
Smoothed analysis tracks the expected number of pivot steps after adding Gaussian noise with standard deviation σ to an adversarial linear program with n constraints and d variables; smaller σ means less noise and a tougher promise. The tightest pre-2025 pivot bound came from Huiberts, Yin Tat Lee and Fangda Zhang in 2023, available via TheoretiCS, with a leading dependence of about O(σ^(−3/2) · d^(13/4) · log(n)^(7/4)) pivot steps (plus suppressed logarithmic factors).
Bach and Huiberts push that dependence down to O(σ^(−1/2) · d^(11/4) · log(n)^(7/4)) pivot steps in their new paper, alongside a matching lower bound indicating that no simplex method can beat that σ-scaling by more than logarithmic factors. The technical engine is a three-phase shadow-vertex strategy that measures progress through what the authors call a “semi-random shadow,” avoiding geometric alignments that previously blew up the step count.
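The size of the improvement is easy to work out from the two leading terms. The sketch below drops all constants and lower-order factors, so only the ratio between the two expressions is meaningful; the sample values of σ, d and n are arbitrary.

```python
# Compare the leading terms of the 2023 bound and the new
# Bach-Huiberts bound. Constants and suppressed logarithmic
# factors are dropped, so only the ratio is meaningful.
import math

def bound_2023(sigma, d, n):
    # O(sigma^(-3/2) * d^(13/4) * log(n)^(7/4))
    return sigma**-1.5 * d**(13 / 4) * math.log(n)**(7 / 4)

def bound_2025(sigma, d, n):
    # O(sigma^(-1/2) * d^(11/4) * log(n)^(7/4))
    return sigma**-0.5 * d**(11 / 4) * math.log(n)**(7 / 4)

sigma, d, n = 0.01, 50, 1000  # arbitrary sample parameters
ratio = bound_2023(sigma, d, n) / bound_2025(sigma, d, n)
# The log factors cancel, leaving ratio = sigma^(-1) * d^(1/2),
# so the gap between the bounds grows as the noise shrinks.
print(ratio)
```

At σ = 0.01 and d = 50 the leading terms differ by a factor of 100·√50, a little over 700, which is why the σ-exponent is the headline number.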
Bach and Huiberts’ talk appeared Dec. 17 on the FOCS 2025 schedule, in a slot that underscored how central the simplex method remains to modern algorithm theory — even for an algorithm first deployed in the mid-20th century.
In an October feature for Quanta Magazine, Huiberts put the longstanding frustration plainly: “our traditional tools for studying algorithms don’t work.” The same story quotes her saying the new result shows “that we fully understand [this] model of the simplex method.”
For practitioners, the new proof is not a drop-in replacement for the commercial pivot rules that dominate today’s solvers. But for theory, it closes a long-running gap: in the most studied smoothed setting, the simplex method’s sensitivity to noise is now pinned down up to logarithms, giving researchers a clean target for whatever comes next — whether that is new models closer to real data or guarantees for the pivot rules software actually uses.